mistral-6.0.0/0000775000175100017510000000000013245513605013220 5ustar zuulzuul00000000000000mistral-6.0.0/releasenotes/0000775000175100017510000000000013245513604015710 5ustar zuulzuul00000000000000mistral-6.0.0/releasenotes/notes/0000775000175100017510000000000013245513605017041 5ustar zuulzuul00000000000000mistral-6.0.0/releasenotes/notes/changing-isolation-level-to-read-committed-7080833ad284b901.yaml0000666000175100017510000000624713245513262032126 0ustar zuulzuul00000000000000--- fixes: - | [`bug 1518012 <https://bugs.launchpad.net/mistral/+bug/1518012>`_] [`bug 1513456 <https://bugs.launchpad.net/mistral/+bug/1513456>`_] Fix concurrency issues by using READ_COMMITTED. This release note describes the bugs, the problem behind them and the fix: * #1513456 - task stuck in RUNNING state when all action executions are finished. * #1518012 - WF execution stays in RUNNING although task and action executions are in SUCCESS. This fix does not require any action from Mistral users and does not have any implications other than the bug fix. The state of a workflow execution was not updated even when all task executions were completed if some tasks finished at the same time as other tasks. Because connections were used with transaction isolation level REPEATABLE_READ, each process was using a snapshot of the DB created at the first read statement in that transaction. When a task finished and evaluated the state of all the other tasks, it did not see the up-to-date state of those tasks - and so, because not all tasks appeared to be completed, the task did not change the workflow execution state. Similar behavior happened with multiple action executions under the same task. On completion, each action execution checked the status of the other action executions and did not see their up-to-date state - causing the task execution to stay in the RUNNING state. The solution is to change the DB transaction isolation level from REPEATABLE_READ to READ_COMMITTED so that process A can see changes committed in other transactions even if process A is in the middle of a transaction. 
A short explanation regarding the different isolation levels: - | REPEATABLE_READ - while in a transaction, the first read operation to the DB creates a snapshot of the entire DB, so you are guaranteed that all the data in the DB will remain the same until the end of the transaction. REPEATABLE_READ example: * ConnectionA selects from tableA in a transaction. * ConnectionB deletes all rows from tableB in a transaction. * ConnectionB commits. * ConnectionA loops over the rows of tableA and fetches from tableB using the tableA_tableB_FK - ConnectionA will still get rows from tableB, because it reads from its snapshot. - | READ_COMMITTED - while in a transaction, every query to the DB will get the latest committed data. READ_COMMITTED example: * ConnectionA starts a transaction. * ConnectionB starts a transaction. * ConnectionA inserts a row into tableA and commits. * ConnectionB inserts a row into tableA. * ConnectionB selects from tableA and gets two rows. * ConnectionB commits / rolls back. Two good articles about isolation levels are: * `Differences between READ-COMMITTED and REPEATABLE-READ transaction isolation levels `_. * `MySQL performance implications of InnoDB isolation modes `_. mistral-6.0.0/releasenotes/notes/ironic-api-newton-9397da8135bb97b4.yaml0000666000175100017510000000073013245513262025441 0ustar zuulzuul00000000000000--- features: - It is now possible to use the Bare metal (Ironic) API features introduced in API versions 1.10 to 1.22. upgrade: - The required Ironic API version was bumped to '1.22' (corresponding to Ironic 6.2.0 - the Newton final release). - Due to the default Ironic API version change to '1.22', new bare metal nodes created with the 'node_create' action appear in the "enroll" provision state instead of "available". Please update your workflows accordingly. mistral-6.0.0/releasenotes/notes/mistral-murano-actions-2250f745aaf8536a.yaml0000666000175100017510000000006413245513262026471 0ustar zuulzuul00000000000000--- features: - Murano actions are now supported. 
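The snapshot behaviour behind the concurrency bug above can be sketched with a toy in-memory store. This is purely illustrative - real databases implement isolation with MVCC, not dict copies - and all class and variable names below are hypothetical:

```python
# Toy illustration of REPEATABLE_READ (snapshot) vs READ_COMMITTED
# visibility. Real databases use MVCC, not dict copies.
import copy


class Database:
    def __init__(self):
        self.committed = {"task1": "RUNNING", "task2": "RUNNING"}

    def begin(self, isolation):
        return Transaction(self, isolation)


class Transaction:
    def __init__(self, db, isolation):
        self.db = db
        self.isolation = isolation
        self.snapshot = None
        self.pending = {}

    def read(self, key):
        if self.isolation == "REPEATABLE_READ":
            # The first read freezes a snapshot of the whole DB.
            if self.snapshot is None:
                self.snapshot = copy.deepcopy(self.db.committed)
            return self.snapshot[key]
        # READ_COMMITTED: every read sees the latest committed data.
        return self.db.committed[key]

    def write(self, key, value):
        self.pending[key] = value

    def commit(self):
        self.db.committed.update(self.pending)


db = Database()

txn_a = db.begin("REPEATABLE_READ")
txn_a.read("task1")                 # freezes txn_a's snapshot

txn_b = db.begin("REPEATABLE_READ")
txn_b.write("task2", "SUCCESS")
txn_b.commit()

print(txn_a.read("task2"))          # RUNNING - stale snapshot
print(db.begin("READ_COMMITTED").read("task2"))  # SUCCESS
```

Under REPEATABLE_READ the first reader keeps seeing its stale snapshot, which is exactly why a finishing task could miss its siblings' completed states; under READ_COMMITTED the commit is immediately visible.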
mistral-6.0.0/releasenotes/notes/add-action-region-to-actions-353f6c4b10f76677.yaml0000666000175100017510000000063613245513262027362 0ustar zuulzuul00000000000000--- features: - Support for specifying 'action_region' for OpenStack actions, so that it's possible to operate on resources in different regions within a single workflow. upgrade: - Run ``python tools/sync_db.py --config-file `` to re-populate the database. deprecations: - The config option 'os-actions-endpoint-type' was moved from the DEFAULT group to the 'openstack_actions' group. mistral-6.0.0/releasenotes/notes/support-env-in-adhoc-actions-20c98598893aa19f.yaml0000666000175100017510000000042713245513262027450 0ustar zuulzuul00000000000000--- fixes: - Added support for referencing task and workflow context data, including environment variables via env(), when using YAQL/Jinja2 expressions inside ad-hoc actions. YAQL/Jinja2 expressions can reference env() and other context data in the base-input section. mistral-6.0.0/releasenotes/notes/update-retry-policy-fb5e73ce717ed066.yaml0000666000175100017510000000057313245513262026162 0ustar zuulzuul00000000000000--- critical: - Due to bug https://bugs.launchpad.net/mistral/+bug/1631140, Mistral was not honoring a retry count of 1. With this bug fixed, Mistral now accepts count value 1 as a valid retry value. - Mistral does not consider the initial task run as a retry; the retry count only applies after the failure of the initial task execution. mistral-6.0.0/releasenotes/notes/mistral-customize-authorization-d6b9a965f3056f09.yaml0000666000175100017510000000006613245513262030473 0ustar zuulzuul00000000000000--- features: - Role based access control was added. mistral-6.0.0/releasenotes/notes/mistral-api-server-https-716a6d741893dd23.yaml0000666000175100017510000000012113245513262026671 0ustar zuulzuul00000000000000--- features: - The Mistral API server can be configured to handle HTTPS requests. 
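The fixed retry semantics described in the retry-policy note above (the initial run is not a retry; a count of 1 yields exactly one retry after the initial failure) can be sketched as follows. This is an illustrative helper, not Mistral's actual engine code:

```python
# Sketch of the fixed retry semantics: one initial run plus up to
# `count` retries after a failure (so count=1 -> one retry).
# Illustrative only - not Mistral's actual engine code.
def run_with_retry(action, count):
    attempts = 0
    last_exc = None
    for _ in range(1 + count):      # initial run + `count` retries
        attempts += 1
        try:
            return action(), attempts
        except Exception as exc:
            last_exc = exc
    raise RuntimeError("failed after %d attempts" % attempts) from last_exc


calls = []

def flaky():
    """Fails on the first call, succeeds on the second."""
    calls.append(1)
    if len(calls) < 2:
        raise ValueError("transient failure")
    return "ok"

result, attempts = run_with_retry(flaky, count=1)
print(result, attempts)             # ok 2
```

With the old buggy behaviour a count of 1 would effectively have been ignored; here it gives the failing initial run one more chance, succeeding on the second attempt.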
mistral-6.0.0/releasenotes/notes/validate-ad-hoc-action-api-added-6d7eaaedbe8129a7.yaml0000666000175100017510000000010213245513262030363 0ustar zuulzuul00000000000000--- features: - A new API for validating ad-hoc actions was added.././@LongLink0000000000000000000000000000015000000000000011211 Lustar 00000000000000mistral-6.0.0/releasenotes/notes/workflow-create-instance-YaqlEvaluationException-e22afff26a193c4f.yamlmistral-6.0.0/releasenotes/notes/workflow-create-instance-YaqlEvaluationException-e22afff26a193c4f.y0000666000175100017510000000012013245513262033331 0ustar zuulzuul00000000000000--- fixes: - Fix for YaqlEvaluationException in the std.create_instance workflow. mistral-6.0.0/releasenotes/notes/external_openstack_action_mapping_support-5cec5d9d5192feb7.yaml0000666000175100017510000000051113245513262033050 0ustar zuulzuul00000000000000--- features: - An external OpenStack action mapping file can be specified for the sync_db.sh or mistral-db-manage script. For more details see 'sync_db.sh --help' or 'mistral-db-manage --help'. - It is now optional to list OpenStack modules in the mapping file that you do not want to include in the supported action set. mistral-6.0.0/releasenotes/notes/policy-and-doc-in-code-9f1737c474998991.yaml0000666000175100017510000000117313245513262026023 0ustar zuulzuul00000000000000--- features: - | Mistral now supports policy in code, which means that if users haven't modified any policy rules, they can leave the policy file (in `json` or `yaml` format) empty or remove it altogether, because Mistral now keeps all default policies in the `mistral/policies` module. Users can still modify or generate a `policy.yaml` file, and any rules that appear in it will override the corresponding policy rules in code. other: - | The default `policy.json` file has been removed as Mistral now generates the default policies in code. Please be aware of this if you use that file in your environment. 
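The policy-in-code behaviour described above amounts to "defaults live in code, a policy file only overrides the rules it names". A minimal sketch of that merge, with hypothetical rule names (not Mistral's real policy rules):

```python
# Sketch of "policy in code": rule defaults live in code and an
# operator's policy file overrides only the rules it names.
# The rule names below are hypothetical, not Mistral's real ones.
DEFAULT_POLICIES = {
    "workflows:create": "rule:admin_or_owner",
    "workflows:delete": "rule:admin_only",
}


def effective_policies(policy_file_rules=None):
    """Merge in-code defaults with overrides from a policy file."""
    merged = dict(DEFAULT_POLICIES)
    merged.update(policy_file_rules or {})   # file entries win
    return merged


# An empty (or missing) policy file leaves the defaults intact:
print(effective_policies())
# A file entry overrides only the rule it names:
print(effective_policies({"workflows:delete": "rule:admin_or_owner"}))
```

This is why an empty or deleted policy file is now safe: the in-code defaults always apply unless explicitly overridden.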
mistral-6.0.0/releasenotes/notes/add_public_event_triggers-ab6249ca85fd5497.yaml0000666000175100017510000000043713245513262027360 0ustar zuulzuul00000000000000--- features: - | Added the ability to create public event triggers. Public event triggers are applied to all projects, i.e. workflows are triggered by an event in any project. Currently public event triggers may only be created by an admin, but this can be changed in policy.json. mistral-6.0.0/releasenotes/notes/workflow-sharing-746255cda20c48d2.yaml0000666000175100017510000000044413245513262025364 0ustar zuulzuul00000000000000--- features: - | Added support for the `workflow sharing`_ feature. Users of one project can share workflows with other projects using this feature. .. _workflow sharing: https://specs.openstack.org/openstack/mistral-specs/specs/mitaka/approved/mistral-workflow-resource-sharing.html mistral-6.0.0/releasenotes/notes/add-json-dump-deprecate-json-pp-252c6c495fd2dea1.yaml0000666000175100017510000000055413245513262030205 0ustar zuulzuul00000000000000--- features: - | A new YAQL/Jinja2 expression function has been added for outputting JSON. It is json_dump and accepts one argument, which is the object to be serialised to JSON. deprecations: - | The YAQL/Jinja2 expression function ``json_pp`` has been deprecated and will be removed in the S cycle. ``json_dump`` should be used instead. mistral-6.0.0/releasenotes/notes/use-workflow-uuid-30d5e51c6ac57f1d.yaml0000666000175100017510000000037613245513262025634 0ustar zuulzuul00000000000000--- deprecations: - Usage of workflow names in the system (e.g. creating executions/cron-triggers, workflow CRUD operations, etc.) is deprecated; please use workflow UUIDs instead. The workflow sharing feature can only be used with workflow UUIDs. mistral-6.0.0/releasenotes/notes/mistral-gnocchi-actions-f26fd76b8a4df40e.yaml0000666000175100017510000000006513245513262027034 0ustar zuulzuul00000000000000--- features: - Gnocchi actions are now supported. 
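Conceptually, the ``json_dump`` expression function described above is single-argument JSON serialisation, much like what Python's standard library provides. A sketch of the equivalent behaviour (not Mistral's actual implementation):

```python
import json


def json_dump(obj):
    # Conceptual analogue of the json_dump expression function:
    # takes one argument and returns its JSON serialisation.
    return json.dumps(obj)


print(json_dump({"state": "SUCCESS", "tasks": ["t1", "t2"]}))
# {"state": "SUCCESS", "tasks": ["t1", "t2"]}
```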
mistral-6.0.0/releasenotes/notes/alternative-rpc-layer-21ca7f6171c8f628.yaml0000666000175100017510000000037513245513262026306 0ustar zuulzuul00000000000000--- features: - Mistral now supports an alternative RPC layer that calls RabbitMQ directly instead of going through Oslo. - Tasks support a new flag 'safe-rerun'. If it is set to 'true', a task will be re-run if the executor dies during execution. mistral-6.0.0/releasenotes/notes/support-created-at-yaql-function-execution-6ece8eaf34664c38.yaml0000666000175100017510000000021113245513262032544 0ustar zuulzuul00000000000000--- features: - Mistral action developers can get the start time of a workflow execution by using ``<% execution().created_at %>``. mistral-6.0.0/releasenotes/notes/transition-message-8dc4dd99240bd0f7.yaml0000666000175100017510000000024213245513262026043 0ustar zuulzuul00000000000000--- features: - | Users can now provide a custom message for the fail/pause/success transitions, e.g. - fail(msg='error in task'): <% condition if any %> ././@LongLink0000000000000000000000000000015000000000000011211 Lustar 00000000000000mistral-6.0.0/releasenotes/notes/include-output-paramter-in-action-execution-list-c946f1b38dc5a052.yamlmistral-6.0.0/releasenotes/notes/include-output-paramter-in-action-execution-list-c946f1b38dc5a052.y0000666000175100017510000000075513245513262033101 0ustar zuulzuul00000000000000--- features: - | A new parameter called 'include_output' was added to the action execution API. By default the output field is not returned when calling the list action executions API. critical: - | By default, the output field will not be returned when calling the list action executions API. In previous versions it was, so users who relied on this and/or want to get the output field when listing action executions can now do so only by using the new 'include_output' parameter. 
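The ``include_output`` behaviour described above can be sketched as a filter over list responses; the data shapes and function name here are hypothetical, for illustration only:

```python
# Sketch of the include_output flag: list responses drop the
# (potentially large) output field unless explicitly requested.
# The data shapes are hypothetical.
def list_action_executions(executions, include_output=False):
    result = []
    for ex in executions:
        item = dict(ex)                  # don't mutate the source
        if not include_output:
            item.pop("output", None)
        result.append(item)
    return result


execs = [{"id": "a1", "state": "SUCCESS", "output": {"big": "blob"}}]
print(list_action_executions(execs))                        # no output field
print(list_action_executions(execs, include_output=True))   # old behaviour
```

Omitting the output field by default keeps list responses small; passing the flag restores the pre-change payloads.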
mistral-6.0.0/releasenotes/notes/tacket-actions-support-2b4cee2644313cb3.yaml0000666000175100017510000000006413245513262026552 0ustar zuulzuul00000000000000--- features: - Tacker actions are now supported. mistral-6.0.0/releasenotes/notes/support-manage-cron-trigger-by-id-ab544e8068b84967.yaml0000666000175100017510000000010313245513262030360 0ustar zuulzuul00000000000000--- features: - Support for managing a cron-trigger instance by id. mistral-6.0.0/releasenotes/notes/update-mistral-docker-image-0c6294fc021545e0.yaml0000666000175100017510000000051513245513262027253 0ustar zuulzuul00000000000000--- features: - The Mistral docker image and tooling have been updated to significantly ease the starting of a Mistral cluster. The setup now supports all-in-one and multi-container deployments. Also, the scripts were cleaned up and aligned with Docker best practices. fixes: - Fixed JavaScript support in the docker image. mistral-6.0.0/releasenotes/notes/mistral-docker-image-9d6e04ac928289dd.yaml0000666000175100017510000000014513245513262026161 0ustar zuulzuul00000000000000--- prelude: > A pre-installed Mistral docker image is now available to get a quick idea of Mistral. mistral-6.0.0/releasenotes/notes/mistral-aodh-actions-e4c2b7598d2e39ef.yaml0000666000175100017510000000006213245513262026262 0ustar zuulzuul00000000000000--- features: - Aodh actions are now supported. ././@LongLink0000000000000000000000000000015000000000000011211 Lustar 00000000000000mistral-6.0.0/releasenotes/notes/function-called-tasks-available-in-an-expression-17ca83d797ffb3ab.yamlmistral-6.0.0/releasenotes/notes/function-called-tasks-available-in-an-expression-17ca83d797ffb3ab.y0000666000175100017510000000045513245513262033130 0ustar zuulzuul00000000000000--- features: - | A new function, called tasks, is available from within an expression (YAQL, Jinja2). This function allows filtering all tasks of a user by workflow execution id and/or state. 
In addition it is possible to get tasks recursively and to flatten the tasks list. mistral-6.0.0/releasenotes/notes/mistral-senlin-actions-f3fe359c4e91de01.yaml0000666000175100017510000000006413245513262026627 0ustar zuulzuul00000000000000--- features: - Senlin actions are now supported. mistral-6.0.0/releasenotes/notes/evaluate_env_parameter-14baa54c860da11c.yaml0000666000175100017510000000143413245513262026720 0ustar zuulzuul00000000000000--- fixes: - When we pass a workflow environment to workflow parameters using 'env', Mistral first evaluates it, assuming that it can contain expressions (YAQL/Jinja). For example, one environment variable can be expressed through another. In some cases this causes problems, for example, if the environment is too big and has many expressions, especially something like <% $ %> or <% env() %>. Also, in some cases we don't want any evaluation to happen at all, e.g. if we want to keep some informative text containing expressions in the environment. In order to address that, the 'evaluate_env' workflow parameter was added, defaulting to True for backwards compatibility. If it's set to False, evaluation of expressions in the environment is disabled. mistral-6.0.0/releasenotes/notes/keycloak-auth-support-74131b49e2071762.yaml0000666000175100017510000000015613245513262026115 0ustar zuulzuul00000000000000--- features: - Mistral now supports authentication with a KeyCloak server using the OpenID Connect protocol. mistral-6.0.0/releasenotes/notes/yaml-json-parse-53217627a647dc1d.yaml0000666000175100017510000000024313245513262025022 0ustar zuulzuul00000000000000--- features: - | Added yaml_parse and json_parse expression functions. Each accepts a string, parses it as either JSON or YAML, and returns an object. mistral-6.0.0/releasenotes/notes/region-name-support-9e4b4ccd963ace88.yaml0000666000175100017510000000042113245513262026230 0ustar zuulzuul00000000000000--- fixes: - | [`bug 1633345 <https://bugs.launchpad.net/mistral/+bug/1633345>`_] Users can now define the target region for the OpenStack actions. 
This can be done via the API using the X-Region-Name and X-Target-Region-Name headers in case the multi-vim feature is used. mistral-6.0.0/releasenotes/notes/new-service-actions-support-47279bd649732632.yaml0000666000175100017510000000107313245513262027260 0ustar zuulzuul00000000000000--- prelude: > Actions of several OpenStack services are supported out of the box in Mitaka, including Barbican, Cinder(V2), Swift, Trove, Zaqar and Mistral. upgrade: - During an upgrade to Mitaka, operators or administrators need to run the ``python tools/get_action_list.py `` command to generate service action names and values for updating ``mistral/actions/openstack/mapping.json``, then run ``python tools/sync_db.py`` to populate the database. Please note that some services like Neutron, Swift and Zaqar don't support the command yet. mistral-6.0.0/releasenotes/notes/create-and-run-workflows-within-namespaces-e4fba869a889f55f.yaml0000666000175100017510000000153313245513262032533 0ustar zuulzuul00000000000000--- features: - | Creating and running workflows within a namespace. Workflows with the same name can be added to the same project as long as they are within different namespaces. This feature is backwards compatible. All existing workflows are assumed to be in the default namespace, represented by an empty string. Also, if a workflow is created without a namespace specified, it is assumed to be in the default namespace. When a workflow is being executed, the namespace is saved under params and passed to all its sub workflow executions. When looking for the next sub-workflow to run, the correct workflow will be found by name and namespace, where the namespace can be the workflow namespace or the default namespace. Workflows in the same namespace as the top workflow will be given a higher priority.mistral-6.0.0/releasenotes/notes/mistral-tempest-plugin-2f6dcbceb4d27eb0.yaml0000666000175100017510000000021513245513262027060 0ustar zuulzuul00000000000000--- prelude: > A Tempest plugin has been implemented. 
Now Mistral tests can be run from the Mistral repo as well as from the Tempest repo. mistral-6.0.0/releasenotes/notes/.placeholder0000666000175100017510000000000013245513262021313 0ustar zuulzuul00000000000000mistral-6.0.0/releasenotes/notes/changing-context-in-delayed-calls-78d8e9a622fe3fe9.yaml0000666000175100017510000000040213245513262030612 0ustar zuulzuul00000000000000--- security: - > [`bug 1521802 <https://bugs.launchpad.net/mistral/+bug/1521802>`_] Fixed a problem where sub-workflow executions were sometimes run/saved under the wrong tenant in the cron trigger periodic task in a multi-tenancy deployment. mistral-6.0.0/releasenotes/notes/role-based-resource-access-control-3579714be15d9b0b.yaml0000666000175100017510000000022513245513262030643 0ustar zuulzuul00000000000000--- features: - By default, the admin user can get/list/update/delete other projects' resources. In Pike, only workflows/executions are supported. mistral-6.0.0/releasenotes/notes/magnum-actions-support-b131fa942b937fa5.yaml0000666000175100017510000000033613245513262026577 0ustar zuulzuul00000000000000--- features: - Magnum actions are now supported. upgrade: - During an upgrade to Newton, operators or administrators need to run ``python tools/sync_db.py`` to populate the database with Magnum action definitions. mistral-6.0.0/releasenotes/notes/drop-ceilometerclient-b33330a28906759e.yaml0000666000175100017510000000023413245513262026215 0ustar zuulzuul00000000000000--- fixes: - | Removed the ceilometerclient requirement. This library is not maintained and the Ceilometer API is dead, so this integration has been dropped. mistral-6.0.0/releasenotes/source/0000775000175100017510000000000013245513605017211 5ustar zuulzuul00000000000000mistral-6.0.0/releasenotes/source/newton.rst0000666000175100017510000000026613245513262021262 0ustar zuulzuul00000000000000=================================== Newton Series Release Notes =================================== .. 
release-notes:: :branch: origin/stable/newton :earliest-version: 3.0.0 mistral-6.0.0/releasenotes/source/_static/0000775000175100017510000000000013245513605020637 5ustar zuulzuul00000000000000mistral-6.0.0/releasenotes/source/_static/.placeholder0000666000175100017510000000000013245513262023111 0ustar zuulzuul00000000000000mistral-6.0.0/releasenotes/source/liberty.rst0000666000175100017510000000022213245513262021412 0ustar zuulzuul00000000000000============================== Liberty Series Release Notes ============================== .. release-notes:: :branch: origin/stable/liberty mistral-6.0.0/releasenotes/source/pike.rst0000666000175100017510000000021713245513262020674 0ustar zuulzuul00000000000000=================================== Pike Series Release Notes =================================== .. release-notes:: :branch: stable/pike mistral-6.0.0/releasenotes/source/conf.py0000666000175100017510000002152413245513262020515 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # Mistral Release Notes documentation build configuration file, created by # sphinx-quickstart on Tue Nov 3 17:40:50 2015. # # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. 
# If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. # sys.path.insert(0, os.path.abspath('.')) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. # needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ 'openstackdocstheme', 'reno.sphinxext', ] # Add any paths that contain templates here, relative to this directory. # templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. # source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Mistral Release Notes' copyright = u'2015, Mistral Developers' # Release notes are version independent release = '' version = '' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all # documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). 
# add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. # show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. # modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. # keep_warnings = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'openstackdocs' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. # html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". # html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. 
# html_extra_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. # Must set this variable to include year, month, day, hours, and minutes. html_last_updated_fmt = '%Y-%m-%d %H:%M' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # html_domain_indices = True # If false, no index is generated. html_use_index = False # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. # html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'MistralReleaseNotesdoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). # 'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). # 'pointsize': '10pt', # Additional stuff for the LaTeX preamble. # 'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). 
latex_documents = [ ('index', 'MistralReleaseNotes.tex', u'Mistral Release Notes Documentation', u'Mistral Developers', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # If true, show page references after internal links. # latex_show_pagerefs = False # If true, show URL addresses after external links. # latex_show_urls = False # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'mistralreleasenotes', u'Mistral Release Notes Documentation', [u'Mistral Developers'], 1) ] # If true, show URL addresses after external links. # man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'MistralReleaseNotes', u'Mistral Release Notes Documentation', u'Mistral Developers', 'MistralReleaseNotes', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. # texinfo_appendices = [] # If false, no module index is generated. # texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. # texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. 
# texinfo_no_detailmenu = False # -- Options for Internationalization output ------------------------------ locale_dirs = ['locale/'] # -- Options for openstackdocstheme ------------------------------------------- repository_name = 'openstack/mistral' bug_project = 'mistral' bug_tag = '' mistral-6.0.0/releasenotes/source/unreleased.rst0000666000175100017510000000016013245513262022070 0ustar zuulzuul00000000000000============================== Current Series Release Notes ============================== .. release-notes:: mistral-6.0.0/releasenotes/source/index.rst0000666000175100017510000000135013245513272021053 0ustar zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ====================== Mistral Release Notes ====================== .. toctree:: :maxdepth: 1 unreleased pike ocata newton mitaka liberty mistral-6.0.0/releasenotes/source/mitaka.rst0000666000175100017510000000023213245513262021207 0ustar zuulzuul00000000000000=================================== Mitaka Series Release Notes =================================== .. release-notes:: :branch: origin/stable/mitaka mistral-6.0.0/releasenotes/source/ocata.rst0000666000175100017510000000023013245513262021026 0ustar zuulzuul00000000000000=================================== Ocata Series Release Notes =================================== .. 
release-notes:: :branch: origin/stable/ocata mistral-6.0.0/releasenotes/source/_templates/0000775000175100017510000000000013245513605021346 5ustar zuulzuul00000000000000mistral-6.0.0/releasenotes/source/_templates/.placeholder0000666000175100017510000000000013245513262023620 0ustar zuulzuul00000000000000mistral-6.0.0/doc/0000775000175100017510000000000013245513604013764 5ustar zuulzuul00000000000000mistral-6.0.0/doc/source/0000775000175100017510000000000013245513604015264 5ustar zuulzuul00000000000000mistral-6.0.0/doc/source/api/0000775000175100017510000000000013245513604016035 5ustar zuulzuul00000000000000mistral-6.0.0/doc/source/api/v2.rst0000666000175100017510000001510513245513261017121 0ustar zuulzuul00000000000000V2 API ====== This API describes the ways of interacting with Mistral service via HTTP protocol using Representational State Transfer concept (ReST). Basics ------- Media types ^^^^^^^^^^^ Currently this API relies on JSON to represent states of REST resources. Error states ^^^^^^^^^^^^ The common HTTP Response Status Codes (https://github.com/for-GET/know-your-http-well/blob/master/status-codes.md) are used. Application root [/] ^^^^^^^^^^^^^^^^^^^^ Application Root provides links to all possible API methods for Mistral. URLs for other resources described below are relative to Application Root. API v2 root [/v2/] ^^^^^^^^^^^^^^^^^^ All API v2 URLs are relative to API v2 root. Workbooks --------- .. autotype:: mistral.api.controllers.v2.resources.Workbook :members: `name` is immutable. tags is a list of values associated with a workbook that a user can use to group workbooks by some criteria (deployment workbooks, Big Data processing workbooks etc.). Note that name and tags get inferred from workbook definition when Mistral service receives a POST request. So they can't be changed in another way. .. autotype:: mistral.api.controllers.v2.resources.Workbooks :members: .. 
rest-controller:: mistral.api.controllers.v2.workbook:WorkbooksController :webprefix: /v2/workbooks Workflows --------- .. autotype:: mistral.api.controllers.v2.resources.Workflow :members: `name` is immutable. tags is a list of values associated with a workflow that a user can use to group workflows by some criteria. Note that name and tags get inferred from workflow definition when Mistral service receives a POST request. So they can't be changed in another way. .. autotype:: mistral.api.controllers.v2.resources.Workflows :members: .. rest-controller:: mistral.api.controllers.v2.workflow:WorkflowsController :webprefix: /v2/workflows Actions ------- .. autotype:: mistral.api.controllers.v2.resources.Action :members: .. autotype:: mistral.api.controllers.v2.resources.Actions :members: .. rest-controller:: mistral.api.controllers.v2.action:ActionsController :webprefix: /v2/actions Executions ---------- .. autotype:: mistral.api.controllers.v2.resources.Execution :members: .. autotype:: mistral.api.controllers.v2.resources.Executions :members: .. rest-controller:: mistral.api.controllers.v2.execution:ExecutionsController :webprefix: /v2/executions Tasks ----- When a workflow starts Mistral creates an execution. It in turn consists of a set of tasks. So Task is an instance of a task described in a Workflow that belongs to a particular execution. .. autotype:: mistral.api.controllers.v2.resources.Task :members: .. autotype:: mistral.api.controllers.v2.resources.Tasks :members: .. rest-controller:: mistral.api.controllers.v2.task:TasksController :webprefix: /v2/tasks .. rest-controller:: mistral.api.controllers.v2.task:ExecutionTasksController :webprefix: /v2/executions Action Executions ----------------- When a Task starts Mistral creates a set of Action Executions. So Action Execution is an instance of an action call described in a Workflow Task that belongs to a particular execution. .. autotype:: mistral.api.controllers.v2.resources.ActionExecution :members: .. 
autotype:: mistral.api.controllers.v2.resources.ActionExecutions :members: .. rest-controller:: mistral.api.controllers.v2.action_execution:ActionExecutionsController :webprefix: /v2/action_executions .. rest-controller:: mistral.api.controllers.v2.action_execution:TasksActionExecutionController :webprefix: /v2/tasks Cron Triggers ------------- Cron trigger is an object that allows to run Mistral workflows according to a time pattern (Unix crontab patterns format). Once a trigger is created it will run a specified workflow according to its properties: pattern, first_execution_time and remaining_executions. .. autotype:: mistral.api.controllers.v2.resources.CronTrigger :members: .. autotype:: mistral.api.controllers.v2.resources.CronTriggers :members: .. rest-controller:: mistral.api.controllers.v2.cron_trigger:CronTriggersController :webprefix: /v2/cron_triggers Environments ------------ Environment contains a set of variables which can be used in specific workflow. Using an Environment it is possible to create and map action default values - just provide '__actions' key in 'variables'. All these variables can be accessed using the Workflow Language with the ``<% $.__env %>`` expression. Example of usage: .. code-block:: yaml workflow: tasks: task1: action: std.echo output=<% $.__env.my_echo_output %> Example of creating action defaults :: ...ENV... "variables": { "__actions": { "std.echo": { "output": "my_output" } } }, ...ENV... Note: using CLI, Environment can be created via JSON or YAML file. .. autotype:: mistral.api.controllers.v2.resources.Environment :members: .. autotype:: mistral.api.controllers.v2.resources.Environments :members: .. rest-controller:: mistral.api.controllers.v2.environment:EnvironmentController :webprefix: /v2/environments Services -------- Through service management API, system administrator or operator can retrieve Mistral services information of the system, including service group and service identifier. 
The internal implementation of this feature make use of tooz library, which needs coordinator backend(the most commonly used at present is Zookeeper) installed, please refer to tooz official documentation for more detailed instruction. There are three service groups according to Mistral architecture currently, namely api_group, engine_group and executor_group. The service identifier contains name of the host that the service is running on and the process identifier of the service on that host. .. autotype:: mistral.api.controllers.v2.resources.Service :members: .. autotype:: mistral.api.controllers.v2.resources.Services :members: .. rest-controller:: mistral.api.controllers.v2.service:ServicesController :webprefix: /v2/services Validation ---------- Validation endpoints allow to check correctness of workbook, workflow and ad-hoc action Workflow Language without having to upload them into Mistral. **POST /v2/workbooks/validation** Validate workbook content (Workflow Language grammar and semantics). **POST /v2/workflows/validation** Validate workflow content (Workflow Language grammar and semantics). **POST /v2/actions/validation** Validate ad-hoc action content (Workflow Language grammar and semantics). These endpoints expect workbook, workflow or ad-hoc action text (Workflow Language) correspondingly in a request body. mistral-6.0.0/doc/source/api/index.rst0000666000175100017510000000012313245513261017673 0ustar zuulzuul00000000000000REST API Specification ====================== .. toctree:: :maxdepth: 2 v2 mistral-6.0.0/doc/source/configuration/0000775000175100017510000000000013245513604020133 5ustar zuulzuul00000000000000mistral-6.0.0/doc/source/configuration/config-guide.rst0000666000175100017510000001056313245513272023235 0ustar zuulzuul00000000000000Mistral Configuration Guide =========================== Mistral configuration is needed for getting it work correctly either with real OpenStack environment or without OpenStack environment. 
**NOTE:** Most of the following operations should be performed from the
mistral directory.

#. Generate *mistral.conf* (if it does not already exist)::

       $ oslo-config-generator \
         --config-file tools/config/config-generator.mistral.conf \
         --output-file /etc/mistral/mistral.conf

#. Edit file **/etc/mistral/mistral.conf**.

#. **If you are not using OpenStack, skip this item.** Provide valid keystone
   auth properties::

       [keystone_authtoken]
       auth_uri = http://<keystone-host>:5000/v3
       identity_uri = http://<keystone-host>
       admin_password = <password>
       admin_tenant_name = <tenant-name>

#. Mistral can also be configured to authenticate with a Keycloak server via
   the OpenID Connect protocol. In order to enable Keycloak authentication the
   following section should be in the config file::

       auth_type = keycloak-oidc

       [keycloak_oidc]
       auth_url = https://<keycloak-host>:<keycloak-port>/auth

   The 'auth_type' property is set to 'keystone' by default. If SSL/TLS
   verification needs to be disabled then 'insecure = True' should also be
   added under the [keycloak_oidc] group.

#. If you want to configure SSL for the Mistral API server, provide the
   following options in the config file::

       [api]
       enable_ssl_api = True

       [ssl]
       ca_file = <path-to-ca-file>
       cert_file = <path-to-certificate-file>
       key_file = <path-to-key-file>

#. **If you don't use OpenStack or you want to disable authentication for the
   Mistral service**, provide ``auth_enable = False`` in the config file::

       [pecan]
       auth_enable = False

#. **If you are not using OpenStack, skip this item**. Register the Mistral
   service and Mistral endpoints on Keystone::

       $ MISTRAL_URL="http://[host]:[port]/v2"
       $ openstack service create workflowv2 --name mistral \
         --description 'OpenStack Workflow service'
       $ openstack endpoint create workflowv2 public $MISTRAL_URL
       $ openstack endpoint create workflowv2 internal $MISTRAL_URL
       $ openstack endpoint create workflowv2 admin $MISTRAL_URL

#. Configure transport properties in the [DEFAULT] section::

       [DEFAULT]
       transport_url = rabbit://<user>:<password>@<rabbit-host>:5672/

#. Configure the database. **SQLite can't be used in production**. Use
   *MySQL* or *PostgreSQL* instead.
Here are the steps for connecting a *MySQL* database to Mistral:

   Make sure you have installed the **mysql-server** package on your database
   machine (it can be your Mistral machine as well).

   Install the MySQL driver for Python::

       $ pip install mysql-python

   Create the database and grant privileges::

       $ mysql -u root -p
       CREATE DATABASE mistral;
       USE mistral;
       GRANT ALL ON mistral.* TO 'root'@'<mistral-host>' IDENTIFIED BY '<password>';

   Configure the connection in the Mistral config::

       [database]
       connection = mysql://<user>:<password>@<database-host>:3306/mistral

   **NOTE**: If PostgreSQL is used, configure the connection item as below::

       connection = postgresql://<user>:<password>@<database-host>:5432/mistral

#. **If you are not using OpenStack, skip this item.** Update the
   mistral/actions/openstack/mapping.json file, which contains all allowed
   OpenStack actions, according to the specific client versions of OpenStack
   projects in your deployment. Please find more detailed information in the
   tools/get_action_list.py script.

#. Configure the Task affinity feature if needed. It makes it possible to
   send a task either to a single task executor or to one executor from a
   group of task executors::

       [executor]
       host = my_favorite_executor

   Then this executor can be referred to in the Workflow Language by

   .. code-block:: yaml

       ...Workflow YAML...
       my_task:
         ...
         target: my_favorite_executor
       ...Workflow YAML...

#. Configure role based access policies for Mistral endpoints (policy.json)::

       [oslo_policy]
       policy_file = <path-to-policy-file>

   The default policy.json file is in ``mistral/etc/``. For more details see
   `policy.json file `_.

#. After that, try to run the Mistral engine and check that it is running
   without any errors::

       $ mistral-server --config-file <path-to-config> --server engine

mistral-6.0.0/doc/source/configuration/policy-guide.rst0000666000175100017510000000052213245513261023257 0ustar zuulzuul00000000000000============================
Mistral Policy Configuration
============================

Configuration
~~~~~~~~~~~~~

The following is an overview of all available policies in Mistral. For a
sample configuration file, refer to :doc:`samples/policy-yaml`.

..
show-policy::
   :config-file: ../../tools/config/policy-generator.mistral.conf

mistral-6.0.0/doc/source/configuration/index.rst0000666000175100017510000000025613245513261022000 0ustar zuulzuul00000000000000Mistral Configuration and Policy Guide
--------------------------------------

.. toctree::
   :maxdepth: 2

   config-guide.rst
   policy-guide.rst
   samples/index.rst

mistral-6.0.0/doc/source/configuration/samples/0000775000175100017510000000000013245513604021577 5ustar zuulzuul00000000000000mistral-6.0.0/doc/source/configuration/samples/index.rst0000666000175100017510000000043213245513261023440 0ustar zuulzuul00000000000000==========================
Sample configuration files
==========================

Configuration files can alter how Mistral behaves at runtime and by default
are located in ``/etc/mistral/``. Links to sample configuration files can be
found below:

.. toctree::

   policy-yaml.rst

mistral-6.0.0/doc/source/configuration/samples/policy-yaml.rst0000666000175100017510000000031213245513261024565 0ustar zuulzuul00000000000000===========
policy.yaml
===========

Use the ``policy.yaml`` file to define additional access controls that apply
to the Mistral services:

.. literalinclude:: ../../_static/mistral.policy.yaml.sample

mistral-6.0.0/doc/source/user/0000775000175100017510000000000013245513604016242 5ustar zuulzuul00000000000000mistral-6.0.0/doc/source/user/wf_lang_v2.rst0000666000175100017510000014403213245513272021027 0ustar zuulzuul00000000000000Mistral Workflow Language v2 specification
==========================================

Introduction
------------

This document fully describes version 2 of the Mistral Workflow Language used
by the Mistral Workflow Service. Since version 1, issued in May 2014, the
Mistral team has completely reworked the language with the goal of making it
easier to understand as well as more consistent and flexible.
Unlike Mistral Workflow Language v1, v2 assumes that all entities that Mistral
works with, like workflows and actions, are completely independent in terms of
how they're referenced and accessed through the API (and also the Python
Client API and CLI). Workbook, the entity that can combine workflows and
actions, still exists in the language but only for namespacing and convenience
purposes. See the `Workbooks section <#workbooks>`__ for more details.

**NOTE**: Mistral Workflow Language and API of version 1 have not been
supported since April 2015 and version 2 is now the only way to interact with
the Mistral service.

Mistral Workflow Language consists of the following main object (entity) types
that will be described in detail below:

- `Workflows <#workflows>`__
- `Actions <#actions>`__

Prerequisites
-------------

Mistral Workflow Language supports the
`YAQL <https://pypi.python.org/pypi/yaql/1.0.0>`__ and
`Jinja2 <http://jinja.pocoo.org/docs/dev/>`__ expression languages to
reference workflow context variables and thereby implement passing data
between workflow tasks. This is also referred to as the Data Flow mechanism.
YAQL is a simple but powerful query language that allows to extract needed
information from JSON structured data. Although Jinja2 is primarily a
templating technology, Mistral also uses it for evaluating expressions, so
users have a choice between YAQL and Jinja2.

It's also possible to combine both expression languages within one workflow
definition. The only limitation is that it's impossible to use both types of
expression within one line. As long as the YAQL and Jinja2 expressions are on
different lines of the workflow definition text, it is valid.
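For instance, a minimal workflow mixing both languages could look like this
(the workflow and variable names here are made up for illustration); the YAQL
and Jinja2 expressions sit on different lines, so the definition is valid:

.. code-block:: mistral

    ---
    version: '2.0'

    mixed_expressions_example:
      input:
        - name
      tasks:
        greet:
          # YAQL expression on this line.
          action: std.echo output=<% $.name %>
          publish:
            # Jinja2 expression on a separate line. Mixing languages is
            # allowed as long as they never share a single line.
            greeting: "{{ task('greet').result }}"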
It is allowed to use YAQL/Jinja2 in the following sections of Mistral Workflow Language: - Workflow `'output' attribute <#common-workflow-attributes>`__ - Workflow `'task-defaults' attribute <#common-workflow-attributes>`__ - `Direct workflow <#direct-workflow>`__ transitions - Task `'publish' attribute <#common-task-attributes>`__ - Task `'input' attribute <#common-task-attributes>`__ - Task `'with-items' attribute <#common-task-attributes>`__ - Task `'target' attribute <#common-task-attributes>`__ - Any attribute of `task policies <#policies>`__ - Action `'base-input' attribute <#attributes>`__ - Action `'output' attribute <#attributes>`__ Mistral Workflow Language is fully based on YAML and knowledge of YAML is a plus for better understanding of the material in this specification. It also takes advantage of supported query languages to define expressions in workflow and action definitions. - Yet Another Markup Language (YAML): http://yaml.org - Yet Another Query Language (YAQL): https://pypi.python.org/pypi/yaql/1.0.0 - Jinja 2: http://jinja.pocoo.org/docs/dev/ Workflows --------- Workflow is the main building block of Mistral Workflow Language, the reason why the project exists. Workflow represents a process that can be described in a various number of ways and that can do some job interesting to the end user. Each workflow consists of tasks (at least one) describing what exact steps should be made during workflow execution. YAML example ^^^^^^^^^^^^ .. 
code-block:: mistral --- version: '2.0' create_vm:   description: Simple workflow example   type: direct input:     - vm_name     - image_ref     - flavor_ref   output:     vm_id: <% $.vm_id %> tasks:     create_server:       action: nova.servers_create name=<% $.vm_name %> image=<% $.image_ref %> flavor=<% $.flavor_ref %>       publish:         vm_id: <% task(create_server).result.id %>       on-success:         - wait_for_instance     wait_for_instance:       action: nova.servers_find id=<% $.vm_id %> status='ACTIVE'       retry:         delay: 5         count: 15 This example workflow simply sends a command to OpenStack Compute service Nova to start creating a virtual machine and wait till it's created using special "retry" policy. Workflow types ^^^^^^^^^^^^^^ Mistral Workflow Language v2 introduces different workflow types and the structure of each workflow type varies according to its semantics. Basically, workflow type encapsulates workflow processing logic, a set of meta rules defining how all workflows of this type should work. Currently, Mistral provides two workflow types: - `Direct workflow <#direct-workflow>`__ - `Reverse workflow <#reverse-workflow>`__ See corresponding sections for details. Common workflow attributes ^^^^^^^^^^^^^^^^^^^^^^^^^^ - **type** - Workflow type. Either 'direct' or 'reverse'. *Optional*. 'direct' by default. - **description** - Arbitrary text containing workflow description. *Optional*. - **input** - List defining required input parameter names and optionally their default values in a form "my_param: 123". *Optional*. - **output** - Any data structure arbitrarily containing YAQL/Jinja2 expressions that defines workflow output. May be nested. *Optional*. - **output-on-error** - Any data structure arbitrarily containing YAQL/Jinja2 expressions that defines output of a workflow to be returned if it goes into error state. May be nested. *Optional*. 
- **task-defaults** - Default settings for some of task attributes defined at workflow level. *Optional*. Corresponding attribute defined for a specific task always takes precedence. Specific task attributes that could be defined in **task-defaults** are the following: - **on-error** - List of tasks which will run after the task has completed with an error. For `direct workflow <#direct-workflow>`__ only. *Optional*. - **on-success** - List of tasks which will run after the task has completed successfully. For `direct workflow <#direct-workflow>`__ only. *Optional*. - **on-complete** - List of tasks which will run after the task has completed regardless of whether it is successful or not. For `direct workflow <#direct-workflow>`__ only. *Optional*. - **requires** - List of tasks that a task depends on. For `reverse workflow <#Reverse_Workflow>`__ only. *Optional*. - **pause-before** - Configures pause-before policy. *Optional*. - **wait-before** - Configures wait-before policy. *Optional*. - **wait-after** - Configures wait-after policy. *Optional*. - **timeout** - Configures timeout policy. *Optional*. - **retry** - Configures retry policy. *Optional*. - **concurrency** - Configures concurrency policy. *Optional*. - **tasks** - Dictionary containing workflow tasks. See below for more details. *Required*. Tasks ^^^^^ Task is what a workflow consists of. It defines a specific computational step in the workflow. Each task can optionally take input data and produce output. In Mistral Workflow Language v2, task can be associated with an action or a workflow. In the example below there are two tasks of different types: .. 
code-block:: mistral action_based_task:   action: std.http url='openstack.org' workflow_based_task:   workflow: backup_vm_workflow vm_id=<% $.vm_id %> Actions will be explained below in an individual paragraph but looking ahead it's worth saying that Mistral provides a lot of actions out of the box (including actions for most of the core OpenStack services) and it's also easy to plug new actions into Mistral. Common task attributes '''''''''''''''''''''' All Mistral tasks, regardless of workflow type, have the following common attributes: - **description** - Arbitrary text containing task description. *Optional*. - **action** - Name of the action associated with the task. Can be a static value or an expression (for example, "{{ _.action_name }}"). *Mutually exclusive with* **workflow**. If neither action nor workflow are provided then the action 'std.noop' will be used that does nothing. - **workflow** - Name of the workflow associated with the task. Can be a static value or an expression (for example, "{{ _.subworkflow_name }}"). *Mutually exclusive with* **action**. - **input** - Actual input parameter values of the task's action or workflow. *Optional*. Value of each parameter is a JSON-compliant type such as number, string etc, dictionary or list. It can also be a YAQL/Jinja2 expression to retrieve value from task context or any of the mentioned types containing inline expressions (for example, string "<% $.movie_name %> is a cool movie!") Can be an expression that evaluates to a JSON object. - **publish** - Dictionary of variables to publish to the workflow context. Any JSON-compatible data structure optionally containing expression to select precisely what needs to be published. Published variables will be accessible for downstream tasks via using expressions. *Optional*. - **publish-on-error** - Same as **publish** but evaluated in case of task execution failures. 
*Optional* - **with-items** - If configured, it allows to run action or workflow associated with a task multiple times on a provided list of items. See `Processing collections using 'with-items' <#processing-collections>`__ for details. *Optional*. - **keep-result** - Boolean value allowing to not store action results after task completion (e.g. if they are large and not needed afterwards). *Optional*. By default is 'true'. - **target** - String parameter. It defines an executor to which task action should be sent to. Target here physically means a name of executors group but task will be run only on one of them. *Optional*. - **pause-before** - Configures pause-before policy. *Optional*. - **wait-before** - Configures wait-before policy. *Optional*. - **wait-after** - Configures wait-after policy. *Optional*. - **timeout** - Configures timeout policy. *Optional*. - **retry** - Configures retry policy. *Optional*. - **concurrency** - Configures concurrency policy. *Optional*. - **safe-rerun** - Boolean value allowing to rerun task if executor dies during action execution. If set to 'true' task may be run twice. *Optional*. By default set to 'false'. workflow '''''''' If a task has the attribute 'workflow' it synchronously starts a sub-workflow with the given name. Example of a static sub-workflow name: .. code-block:: mistral my_task:   workflow: name_of_my_workflow Example of a dynamic sub-workflow name: .. code-block:: mistral --- version: '2.0' framework: input: - magic_workflow_name: show_weather tasks: weather_data: action: std.echo input: output: location: wherever temperature: "22C" publish: weather_data: <% task().result %> on-success: - do_magic do_magic: # Reference workflow by parameter. workflow: <% $.magic_workflow_name %> # Expand dictionary to input parameters. 
input: <% $.weather_data %> show_weather: input: - location - temperature tasks: write_data: action: std.echo input: output: "<% $.location %>: <% $.temperature %>" In this example, we defined two workflows in one YAML snippet and the workflow 'framework' may call the workflow 'show_weather' if 'framework' receives the corresponding workflow name through the input parameter 'magic_workflow_name'. In this case it is set by default so a user doesn't need to pass anything explicitly. Note: Typical use for the dynamic sub-workflow selection is when parts of a workflow can be customized. E.g. collect some weather data and then execute some custom workflow on it. Policies '''''''' Any Mistral task regardless of its workflow type can optionally have configured policies. YAML example .. code-block:: mistral my_task:   action: my_action   pause-before: true   wait-before: 2   wait-after: 4   timeout: 30   retry:     count: 10     delay: 20     break-on: <% $.my_var = true %> continue-on: <% $.my_var = false %> **pause-before** Defines whether Mistral Engine should put the workflow on hold or not before starting a task. **wait-before** Defines a delay in seconds that Mistral Engine should wait before starting a task. **wait-after** Defines a delay in seconds that Mistral Engine should wait after a task has completed before starting next tasks defined in *on-success*, *on-error* or *on-complete*. **timeout** Defines a period of time in seconds after which a task will be failed automatically by engine if it hasn't completed. **concurrency** Defines a max number of actions running simultaneously in a task. *Applicable* only for tasks that have *with-items*. If *concurrency* task property is not set then actions (or workflows in case of nested workflows) of the task will be scheduled for execution all at once. **retry** Defines a pattern how task should be repeated in case of an error. - **count** - Defines a maximum number of times that a task can be repeated. 
- **delay** - Defines a delay in seconds between subsequent task iterations. - **break-on** - Defines an expression that will break iteration loop if it evaluates to 'true'. If it fires then the task is considered error. - **continue-on** - Defines an expression that will continue iteration loop if it evaluates to 'true'. If it fires then the task is considered successful. If it evaluates to 'false' then policy will break the iteration. Retry policy can also be configured on a single line as: .. code-block:: mistral task1:   action: my_action   retry: count=10 delay=5 break-on=<% $.foo = 'bar' %> All parameter values for any policy can be defined as YAQL/Jinja2 expressions. Input syntax '''''''''''' When describing a workflow task it's possible to specify its input parameters in two ways: Full syntax: .. code-block:: mistral my_task:   action: std.http   input: url: http://mywebsite.org     method: GET Simplified syntax: .. code-block:: mistral my_task:   action: std.http url="http://mywebsite.org" method="GET" Syntax with dynamic input parameter map: .. code-block:: mistral --- version: '2.0' example_workflow: input: - http_request_parameters: url: http://mywebsite.org method: GET tasks: setup_task: action: std.http input: <% $.http_request_parameters %> The same rules apply to tasks associated with workflows. Full syntax: .. code-block:: mistral my_task:   workflow: some_nested_workflow   input:     param1: val1     param2: val2 Simplified syntax: .. code-block:: mistral my_task:   workflow: some_nested_workflow param1='val1' param2='val2' Syntax with dynamic input parameter map: .. code-block:: mistral --- version: '2.0' example_workflow: input: - nested_params: {"param1": "val1", "param2": "val2"} tasks: setup_task: workflow: some_nested_workflow input: <% $.nested_params %> **NOTE**: It's also possible to merge these two approaches and specify a part of parameters using simplified key-value pairs syntax and using keyword *input*. 
In this case all the parameters will be effectively merged. If the same parameter is specified in both ways then the one under *input* keyword takes precedence. Direct workflow ^^^^^^^^^^^^^^^ Direct workflow consists of tasks combined in a graph where every next task starts after another one depending on produced result. So direct workflow has a notion of transition. Direct workflow is considered to be completed if there aren't any transitions left that could be used to jump to next tasks. .. image:: /img/Mistral_direct_workflow.png Figure 1. Mistral Direct Workflow. YAML example '''''''''''' .. code-block:: mistral --- version: '2.0' create_vm_and_send_email:  type: direct  input:    - vm_name    - image_id    - flavor_id  output:    result: <% $.vm_id %>  tasks:    create_vm:      action: nova.servers_create name=<% $.vm_name %> image=<% $.image_id %> flavor=<% $.flavor_id %>      publish:        vm_id: <% task(create_vm).result.id %>      on-error:        - send_error_email      on-success:        - send_success_email    send_error_email:      action: send_email to_addrs=['admin@mysite.org'] body='Failed to create a VM'      on-complete:        - fail    send_success_email:      action: send_email to_addrs=['admin@mysite.org'] body='Vm is successfully created and its id <% $.vm_id %>' Direct workflow task attributes ''''''''''''''''''''''''''''''' - **on-success** - List of tasks which will run after the task has completed successfully. *Optional*. - **on-error** - List of tasks which will run after the task has completed with an error. *Optional*. - **on-complete** - List of tasks which will run after the task has completed regardless of whether it is successful or not. *Optional*. Note: All of the above clauses cannot contain task names evaluated as YAQL/Jinja expressions. They have to be static values. However, task transitions can be conditional, based on expressions. See `Transitions with expressions <#transitions-with-expressions>`__ for more details. 
It is important to understand the semantics of **on-success**, **on-error** and **on-complete** around handling action errors. In case if task action returned an error **on-success** and **on-complete** won't prevent from failing the entire workflow execution. Only **on-error** will. The closest analogy is *try-catch-finally* blocks in regular programming languages. **on-error** is similar to *catch* and it serves as an exception handler for possible errors expected by design. Whereas **on-complete** is like *finally* that will run in any case but it won't stop the exception from bubbling up to an upper layer. So **on-complete** should only be understood as a language construction that allows to define some clean up actions. Transitions with expressions '''''''''''''''''''''''''''' Task transitions can be determined by success/error/completeness of the previous tasks and also by additional guard expressions that can access any data produced by upstream tasks and as workflow input. So in the example above task 'create_vm' could also have a YAQL expression on transition to task 'send_success_email' as follows: .. code-block:: mistral create_vm:  ...  on-success:    - send_success_email: <% $.vm_id != null %> And this would tell Mistral to run 'send_success_email' task only if 'vm_id' variable published by task 'create_vm' is not empty. Expressions can also be applied to 'on-error' and 'on-complete'. Engine Commands ''''''''''''''' Mistral has a number of engine commands that can be called within direct workflows. These commands are used to change the Workflow state. - **succeed** - will end the current workflow with the state SUCCESS. - **pause** - will end the current workflow with the state PAUSED. - **fail** - will end the current workflow with the state ERROR. Each of the engine commands accepts a ``msg`` input. This is optional, but if provided will be stored in the state info on the workflow execution. 
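As an illustrative sketch (the task and input names here are hypothetical),
the ``msg`` input can be supplied to an engine command using the
function-call style:

.. code-block:: mistral

    create_server:
      action: nova.servers_create name=<% $.vm_name %>
      on-error:
        # Fail the workflow with an explanatory message stored in its
        # state info.
        - fail(msg='Could not create VM <% $.vm_name %>')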
Workflows that have been ended with ``succeed`` or ``fail`` may not be resumed later, but workflows that have been ended with ``pause`` may be. YAML example '''''''''''' .. code-block:: mistral --- version: '2.0' send_error_mail: tasks: create_server: action: nova.servers_create name=<% $.vm_name %> publish: vm_id: <% task().result.id %> on-complete: - fail: <% not $.vm_id %> In this example we have a short workflow with one task that creates a server in Nova. The task publishes the ID of the virtual machine, but if this value is empty then it will fail the workflow. .. code-block:: mistral on-complete: - taskA - fail - taskB When the engine commands are used with task names in a single list, they are processed one at a time until the workflow reaches a terminal state. In the above example, the ``on-complete`` has three steps to complete - these are executed in order until the workflow reaches a terminal state. So in this case ``taskA`` is called first, then the ``fail`` engine command and ``taskB`` would never be called. ``taskB`` would not be called if ``succeed`` was used in this example either, but if ``pause`` was used ``taskB`` would be called after the workflow is resumed. Fork '''' There are situations when we need to be able to run more than one task after some task has completed. .. code-block:: mistral create_vm:   ...   on-success:     - register_vm_in_load_balancer     - register_vm_in_dns In this case Mistral will run both "register_xxx" tasks simultaneously and this will lead to multiple independent workflow routes being processed in parallel. Join '''' Join flow control allows to synchronize multiple parallel workflow branches and aggregate their data. Full Join (join: all) .. code-block:: mistral register_vm_in_load_balancer:   ...   on-success:     - wait_for_all_registrations register_vm_in_dns:  ...  on-success:    - wait_for_all_registrations try_to_do_something_without_registration:  ...  
  on-error:
    - wait_for_all_registrations

wait_for_all_registrations:
  join: all
  action: send_email

When a task has the property "join" assigned with the value "all", the task
will run only if all upstream tasks (ones that lead to this task) are
completed and the corresponding conditions have triggered. Task A is
considered an upstream task of Task B if Task A has Task B mentioned in any of
its "on-success", "on-error" and "on-complete" clauses regardless of guard
expressions.

Partial Join (join: 2)

.. code-block:: mistral

    register_vm_in_load_balancer:
      ...
      on-success:
        - wait_for_two_registrations

    register_vm_in_dns:
      ...
      on-success:
        - wait_for_two_registrations

    register_vm_in_zabbix:
      ...
      on-success:
        - wait_for_two_registrations

    wait_for_two_registrations:
      join: 2
      action: send_email

When a task has the property "join" assigned with a numeric value, the task
will run once at least this number of upstream tasks are completed and the
corresponding conditions have triggered. In the example above, the task
"wait_for_two_registrations" will run as soon as any two of the
"register_vm_xxx" tasks complete.

Discriminator (join: one)

Discriminator is a special case of Partial Join when the "join" property has
the value 1. It means Mistral will wait for any one completed task. In this
case, instead of 1, it is possible to specify the special string value "one"
which is introduced for symmetry with "all". However, it's up to the user
whether to use "1" or "one".

Reverse workflow
^^^^^^^^^^^^^^^^

In reverse workflow all relationships in the workflow task graph are
dependencies. In order to run this type of workflow we need to specify a task
that needs to be completed; it can be conventionally called the 'target
task'. When Mistral Engine starts a workflow it recursively identifies all
the dependencies that need to be completed first.

.. image:: /img/Mistral_reverse_workflow.png

Figure 2 explains how reverse workflow works. In the example, task **T1** is
chosen as the target task.
So when the workflow starts Mistral will run only tasks **T7**, **T8**, **T5**, **T6**, **T2** and **T1** in the specified order (starting from tasks that have no dependencies). Tasks **T3** and **T4** won't be a part of this workflow because there's no route in the directed graph from **T1** to **T3** or **T4**. YAML example '''''''''''' .. code-block:: mistral --- version: '2.0' create_vm_and_send_email:  type: reverse  input:    - vm_name    - image_id    - flavor_id  output:    result: <% $.vm_id %>  tasks:    create_vm:      action: nova.servers_create name=<% $.vm_name %> image=<% $.image_id %> flavor=<% $.flavor_id %>      publish:        vm_id: <% task(create_vm).result.id %>    search_for_ip:      action: nova.floating_ips_findall instance_id=null      publish:        vm_ip: <% task(search_for_ip).result[0].ip %>    associate_ip:      action: nova.servers_add_floating_ip server=<% $.vm_id %> address=<% $.vm_ip %>      requires: [search_for_ip]    send_email:      action: send_email to='admin@mysite.org' body='Vm is created and id <% $.vm_id %> and ip address <% $.vm_ip %>'      requires: [create_vm, associate_ip] Reverse workflow task attributes '''''''''''''''''''''''''''''''' - **requires** - List of tasks which should be executed before this task. *Optional*. Processing collections ^^^^^^^^^^^^^^^^^^^^^^ YAML example '''''''''''' .. code-block:: mistral --- version: '2.0' create_vms:  description: Creating multiple virtual servers using "with-items".  
input:    - vm_names    - image_ref    - flavor_ref  output:    vm_ids: <% $.vm_ids %>  tasks:    create_servers:      with-items: vm_name in <% $.vm_names %>      action: nova.servers_create name=<% $.vm_name %> image=<% $.image_ref %> flavor=<% $.flavor_ref %>      publish:        vm_ids: <% task(create_servers).result.id %>      on-success:        - wait_for_servers    wait_for_servers:      with-items: vm_id in <% $.vm_ids %>      action: nova.servers_find id=<% $.vm_id %> status='ACTIVE'      retry:        delay: 5        count: <% $.vm_names.len() * 10 %> Workflow "create_vms" in this example creates as many virtual servers as we provide in "vm_names" input parameter. E.g., if we specify vm_names=["vm1", "vm2"] then it'll create servers with these names based on same image and flavor. It is possible because of using "with-items" keyword that makes an action or a workflow associated with a task run multiple times. Value of "with-items" task property contains an expression in the form: 'my_var' in <% YAQL_expression %>. Similar for Jinja2 expression: 'my_var' in {{ Jinja2_expression }}. The most common form is: .. code-block:: mistral with-items:   - var1 in <% YAQL_expression_1 %> # or: var1 in <% Jinja2_expression_1 %>   - var2 in <% YAQL_expression_2 %> # or: var2 in <% Jinja2_expression_2 %>   ...   - varN in <% YAQL_expression_N %> # or: varN in <% Jinja2_expression_N %> where collections expressed as YAQL_expression_1, YAQL_expression_2, YAQL_expression_N must have equal sizes. When a task gets started Mistral will iterate over all collections in parallel, i.e. number of iterations will be equal to length of any collections. Note that in case of using "with-items" task result accessible in workflow context as <% task(task_name).result %> will be a list containing results of corresponding action/workflow calls. If at least one action/workflow call has failed then the whole task will get into ERROR state. 
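As a sketch of the multi-collection form described above, the task below pairs server names with per-server images (a hypothetical fragment; the task and variable names are illustrative):

.. code-block:: mistral

    create_servers:
      with-items:
        - vm_name in <% $.vm_names %>
        - image_ref in <% $.image_refs %>
      action: nova.servers_create name=<% $.vm_name %> image=<% $.image_ref %> flavor=<% $.flavor_ref %>

On each iteration Mistral takes the next element from both collections at once, so "vm_names" and "image_refs" must have equal sizes.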
It's also possible to apply retry policy for tasks with "with-items" property. In this case retry policy will be relaunching all action/workflow calls according to "with-items" configuration. Other policies can also be used the same way as with regular non "with-items" tasks. .. _actions-dsl: Actions ------- Action defines what exactly needs to be done when task starts. Action is similar to a regular function in general purpose programming language like Python. It has a name and parameters. Mistral distinguishes 'system actions' and 'Ad-hoc actions'. System actions ^^^^^^^^^^^^^^ System actions are provided by Mistral out of the box and can be used by anyone. It is also possible to add system actions for specific Mistral installation via a special plugin mechanism. Currently, built-in system actions are: std.fail '''''''' Fail the current workflow. This action can be used to manually set the workflow state to error. Example: .. code-block:: mistral manual_fail: action: std.fail std.http '''''''' Sends an HTTP request. Input parameters: - **url** - URL for the HTTP request. *Required*. - **method** - method for the HTTP request. *Optional*. Default is 'GET'. - **params** - Dictionary or bytes to be sent in the query string for the HTTP request. *Optional*. - **body** - Dictionary, bytes, or file-like object to send in the body of the HTTP request. *Optional*. - **headers** - Dictionary of HTTP Headers to send with the HTTP request. *Optional*. - **cookies** - Dictionary of HTTP Cookies to send with the HTTP request. *Optional*. - **auth** - Auth to enable Basic/Digest/Custom HTTP Auth. *Optional*. - **timeout** - Float describing the timeout of the request in seconds. *Optional*. - **allow_redirects** - Boolean. Set to True if POST/PUT/DELETE redirect following is allowed. *Optional*. - **proxies** - Dictionary mapping protocol to the URL of the proxy. *Optional*. 
- **verify** - Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use. *Optional*. Default is 'True'. Example: .. code-block:: mistral http_task:   action: std.http url='google.com' std.mistral_http '''''''''''''''' This action works just like 'std.http' with the only exception: when sending a request it inserts the following HTTP headers: - **Mistral-Workflow-Name** - Name of the workflow that the current action execution is associated with. - **Mistral-Execution-Id** - Identifier of the workflow execution this action is associated with. - **Mistral-Task-Id** - Identifier of the task execution this action execution is associated with. - **Mistral-Action-Execution-Id** - Identifier of the current action execution. Using this action makes it possible to do any work in an asynchronous manner triggered via the HTTP protocol. That means that Mistral can send a request using 'std.mistral_http' and then any time later whatever system received this request can notify Mistral back (using its public API) with the result of this action. The header **Mistral-Action-Execution-Id** is required for this operation because it is used as a key to find the corresponding action execution in Mistral to attach the result to. std.email ''''''''' Sends an email message via SMTP protocol. - **to_addrs** - Comma separated list of recipients. *Required*. - **subject** - Subject of the message. *Optional*. - **body** - Text containing message body. *Optional*. - **from_addr** - Sender email address. *Required*. - **smtp_server** - SMTP server host name. *Required*. - **smtp_password** - SMTP server password. *Required*. Example: .. code-block:: mistral send_email_task:   action: std.email   input:       to_addrs: [admin@mywebsite.org]       subject: Hello from Mistral :)       body: |         Cheers! (:_:)         -- Thanks, Mistral Team.       
from_addr: mistral@openstack.org       smtp_server: smtp.google.com       smtp_password: SECRET The syntax of the 'std.email' action is pretty verbose. However, it can be significantly simplified using Ad-hoc actions. More about them `below <#ad-hoc-actions>`__. std.ssh ''''''' Runs a Secure Shell command. Input parameters: - **cmd** - String containing a shell command that needs to be executed. *Required*. - **host** - Host name that the command needs to be executed on. *Required*. - **username** - User name to authenticate on the host. *Required*. - **password** - User password to authenticate on the host. *Optional*. - **private_key_filename** - Private key file name which will be used for authentication on the remote host. All private keys should be on the executor host in the **.ssh** directory under the home directory of the user the service runs as. *Optional*. **NOTE**: Authentication using key pairs is supported; the key should be on the Mistral Executor server machine. std.echo '''''''' Simple action mostly needed for testing purposes that returns a predefined result. Input parameters: - **output** - Value of any type that needs to be returned as a result of the action. *Required*. std.javascript '''''''''''''' Evaluates given JavaScript code. **NOTE**: std.js is an alias for std.javascript, i.e. std.js can be used in place of std.javascript. Input parameters: - **script** - The text of the JavaScript snippet that needs to be executed. *Required*. **To use std.javascript, you need to install a number of dependencies and a JS engine.** Currently Mistral uses only the V8 Engine and its wrapper - PyV8. To install it, perform the following steps: 1. Install required libraries - boost, g++, libtool, autoconf, subversion, libv8-legacy-dev: On Ubuntu:: $ sudo apt-get install libboost-all-dev g++ libtool autoconf libv8-legacy-dev subversion make 2. Check out the latest version of PyV8:: $ svn checkout http://pyv8.googlecode.com/svn/trunk/ pyv8 $ cd pyv8 3. 
Build PyV8 - it will checkout last V8 trunk, build it, and then build PyV8:: $ sudo python setup.py build 4. Install PyV8:: $ sudo python setup.py install Example: .. code-block:: mistral --- version: '2.0' generate_uuid:   description: Generates a Universal Unique ID   type: direct   input:     - radix: 16   output:     uuid: <% $.generated_uuid %>   tasks:     generate_uuid_task:       action: std.javascript       input:         context: <% $ %>         script: |           return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {                   var r = Math.random() * 16 | 0, v = c == 'x' ? r : (r&0x3|0x8);                   return v.toString($.radix);           });       publish:         generated_uuid: <% task(generate_uuid_task).result %> Another example for getting the current date and time: .. code-block:: mistral   ---   version: '2.0'   get_date_workflow:     description: Get the current date     type: direct     output:       current_date: <% $.current_date %>     tasks:       get_date_task:         action: std.javascript         input:           context: <% $ %>           script: |             var date = new Date();             return date; # returns "2015-07-12T10:32:12.460000" or use date.toLocaleDateString() for "Sunday, July 12, 2015"         publish:           current_date: <% task(get_date_task).result %> Ad-hoc actions ^^^^^^^^^^^^^^ Ad-hoc action is a special type of action that can be created by user. Ad-hoc action is always created as a wrapper around any other existing system action and its main goal is to simplify using same actions many times with similar pattern. YAML example '''''''''''' .. 
code-block:: mistral --- version: '2.0' error_email: input:    - execution_id base: std.email base-input:    to_addrs: ['admin@mywebsite.org']    subject: 'Something went wrong with your Mistral workflow :('    body: |        Please take a look at Mistral Dashboard to find out what's wrong        with your workflow execution <% $.execution_id %>.        Everything's going to be alright!        -- Sincerely, Mistral Team.    from_addr: 'mistral@openstack.org'    smtp_server: 'smtp.google.com'    smtp_password: 'SECRET' Once this action is uploaded to Mistral any workflow will be able to use it as follows: .. code-block:: mistral my_workflow:  tasks:     ...     send_error_email:       action: error_email execution_id=<% execution().id %> Attributes '''''''''' - **base** - Name of base action that this action is built on top of. *Required*. - **base-input** - Actual input parameters provided to base action. Look at the example above. *Optional*. - **input** - List of declared action parameters which should be specified as corresponding task input. This attribute is optional and used only for documentation purposes. Mistral does not currently enforce actual input parameters to exactly correspond to this list. Base parameters will be calculated from the provided actual parameters using expressions, so what's used in expressions implicitly defines the real input parameters. The dictionary of actual input parameters (expression context) is referenced as '$.' in YAQL and as '_.' in Jinja. Redundant parameters will be simply ignored. - **output** - Any data structure defining how to calculate output of this action based on output of base action. It can optionally have expressions to access properties of base action output through expression context. Workbooks --------- As mentioned before, workbooks still exist in Mistral Workflow Language version 2 but purely for convenience. 
Using workbooks users can combine multiple entities of any type (workflows, actions and triggers) into one document and upload to Mistral service. When uploading a workbook Mistral will parse it and save its workflows, actions and triggers as independent objects which will be accessible via their own API endpoints (/workflows, /actions and /triggers/). Once it's done the workbook comes out of the game. User can just start workflows and use references to workflows/actions/triggers as if they were uploaded without workbook in the first place. However, if we want to modify these individual objects we can modify the same workbook definition and re-upload it to Mistral (or, of course, we can do it independently). Namespacing ^^^^^^^^^^^ One thing that's worth noting is that when using a workbook Mistral uses its name as a prefix for generating final names of workflows, actions and triggers included into the workbook. To illustrate this principle let's take a look at the figure below. .. image:: /img/Mistral_workbook_namespacing.png So after a workbook has been uploaded its workflows and actions become independent objects but with slightly different names. YAML example '''''''''''' .. code-block:: mistral --- version: '2.0' name: my_workbook description: My set of workflows and ad-hoc actions workflows:  local_workflow1:    type: direct    tasks:      task1:        action: local_action str1='Hi' str2=' Mistral!'        on-complete:          - task2    task2:       action: global_action       ...   local_workflow2:     type: reverse     tasks:       task1:         workflow: local_workflow1       task2:         workflow: global_workflow param1='val1' param2='val2'         requires: [task1]         ... 
actions:  local_action:    input:      - str1      - str2    base: std.echo output="<% $.str1 %><% $.str2 %>" **NOTE**: Even though names of objects inside workbooks change upon uploading, Mistral allows referencing between those objects using local names declared in the original workbook. Attributes ^^^^^^^^^^ - **name** - Workbook name. *Required*. - **description** - Workbook description. *Optional*. - **tags** - String with arbitrary comma-separated values. *Optional*. - **workflows** - Dictionary containing workflow definitions. *Optional*. - **actions** - Dictionary containing ad-hoc action definitions. *Optional*. Predefined values/Functions in execution data context ----------------------------------------------------- Using expressions it is possible to access some predefined values in Mistral Workflow Language. - **OpenStack context** - **Task result** - **Execution info** - **Environment** OpenStack context ^^^^^^^^^^^^^^^^^ OpenStack context is available by **$.openstack**. It contains **auth_token**, **project_id**, **user_id**, **service_catalog**, **user_name**, **project_name**, **roles**, **is_admin** properties. Builtin functions in expressions ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ In addition to the current context (i.e. $ in YAQL and _ in Jinja2) expressions have access to a set of predefined functions. The expression languages come with their own individual included functions and operations. Mistral adds the following functions that are available in all the supported languages. This section describes the builtin functions added by Mistral. Tasks function '''''''''''''' Signature: **tasks(workflow_execution_id=null, recursive=false, state=null, flat=false)** Description: This function allows users to filter all tasks by workflow execution id and/or state. In addition, it is possible to get task executions recursively and flatten the task executions list. Parameters: #. 
**workflow_execution_id** - If provided, the tasks function will return task executions for a specific workflow execution (either the current execution or a different one). Otherwise it will return all task executions that match the other parameters. *Optional.* #. **recursive** - This parameter is a boolean value; if it is true then all task executions within nested workflow executions will be returned. This is usually used in combination with a specific workflow_execution_id where you still want to see a nested workflow's task executions. *Optional.* False by default. #. **state** - If provided, the task executions will be filtered by their current state. If state isn't provided, all task executions that match the other parameters will be returned. *Optional.* #. **flat** - If true, only list the task executions that match at least one of the following conditions: * task executions of type action * task executions of type workflow that have a different state from the workflow execution they triggered. For example, if used with a specific workflow_execution_id and the state ERROR it will return tasks that erred despite the workflow succeeding. This can mean that there was an error in the task itself, like an invalid expression in publish. *Optional.* False by default. Example: Workflow definition: .. 
code-block:: mistral --- version: "v2.0" wf: tasks: task: action: std.noop publish: all_tasks_in_this_wf_yaql: <% tasks(execution().id) %> all_tasks_in_this_wf_jinja: "{{ tasks(execution().id) }}" all_tasks_in_error_yaql: <% tasks(null, false, ERROR) %> all_tasks_in_error_jinja: "{{ tasks(None, false, 'ERROR') }}" all_tasks_in_error_yaql_with_kw: <% tasks(state => ERROR) %> all_tasks_in_error_jinja_with_kw: "{{ tasks(state='ERROR') }}" all_tasks_yaql_option1: <% tasks() %> all_tasks_yaql_option2: <% tasks(null, false, null, false) %> all_tasks_jinja_option1: "{{ tasks() }}" all_tasks_jinja_option2: "{{ tasks(None, false, None, false) }}" Task publish result (partial to keep the documentation short): .. warning:: The return value for each task execution hasn't been finalized and isn't considered stable. It may change in a future Mistral release. .. code-block:: json { "all_tasks_in_error_yaql": [ { "id": "3d363d4b-8c19-48fa-a9a0-8721dc5469f2", "name": "fail_task", "type": "ACTION", "workflow_execution_id": "c0a4d2ff-0127-4826-8370-0570ef8cad80", "state": "ERROR", "state_info": "Failed to run action [action_ex_id=bcb04b28-6d50-458e-9b7e-a45a5ff1ca01, action_cls='', attributes='{}', params='{}']\n Fail action expected exception.", "result": "Failed to run action [action_ex_id=bcb04b28-6d50-458e-9b7e-a45a5ff1ca01, action_cls='', attributes='{}', params='{}']\n Fail action expected exception.", "published": {}, "spec": { "action": "std.fail", "version": "2.0", "type": "direct", "name": "fail_task" } } ], "all_tasks_in_this_wf_jinja": [ { "id": "83a34bfe-268c-46f5-9e5c-c16900540084", "name": "task", "type": "ACTION", "workflow_execution_id": "899a3318-b5c0-4860-82b4-a5bd147a4643", "state": "SUCCESS", "state_info": null, "result": null, "published": {}, "spec": { "action": "std.noop", "version": "2.0", "type": "direct", "name": "task", "publish": { "all_tasks_in_error_yaql": "<% tasks(null, false, ERROR) %>", "all_tasks_in_error_jinja": "{{ tasks(None, false, 'ERROR') 
}}", "all_tasks_yaql_option2": "<% tasks(null, false, false, false) %>", "all_tasks_yaql_option1": "<% tasks() %>", "all_tasks_jinja_option1": "{{ tasks() }}", "all_tasks_in_error_jinja_with_kw": "{{ tasks(state='ERROR') }}", "all_tasks_jinja_option2": "{{ tasks(None, false, None, false) }}", "all_tasks_in_this_wf_jinja": "{{ tasks(execution().id) }}", "all_tasks_in_this_wf_yaql": "<% tasks(execution().id) %>" } } } ], "_comment": "other fields were dropped to keep docs short" } Task result ''''''''''' Task result is available by **task().result**. It contains task result and directly depends on action output structure. Note that the *task()* function itself returns more than only task result. It returns the following fields of task executions: * **id** - task execution UUID. * **name** - task execution name. * **spec** - task execution spec dict (loaded from Mistral Workflow Language). * **state** - task execution state. * **state_info** - task execution state info. * **result** - task execution result. In case of a non 'with-items' task it's simply a result of the task's action/sub-workflow execution. For a 'with-items' task it will be a list of results of corresponding action/sub-workflow execution. * **published** - task execution published variables. Execution info ^^^^^^^^^^^^^^ Execution info is available by **execution()**. It contains information about execution itself such as **id**, **wf_spec**, **input** and **start_params**. Executions function ''''''''''''''''''' Signature: **executions(id=null, root_execution_id=null, state=null, from_time=null, to_time=null)** Description: This function allows users to filter all executions by execution id, root_execution_id ,state and/or created_at time. Parameters: #. **id** - If provided will return a list of executions with that id. Otherwise it will return all executions that match the other parameters. *Optional.* #. 
**root_execution_id** - Similar to id above, if provided will return a list of executions with that root_execution_id. Otherwise it will return all executions that match the other parameters. *Optional.* #. **state** - If provided, the executions will be filtered by their current state. If state isn't provided, all executions that match the other parameters will be returned. *Optional.* #. **from_time** - If provided, the executions will be filtered by their created_at time being greater than or equal to the from_time parameter. If from_time isn't provided, all executions that match the other parameters will be returned. The from_time parameter can be provided in the format *YYYY-MM-DD hh:mm:ss*. *Optional.* #. **to_time** - If provided, the executions will be filtered by their created_at time being strictly less than the to_time parameter (an exclusive bound, unlike from_time, which is inclusive). If to_time isn't provided, all executions that match the other parameters will be returned. The to_time parameter can be provided in the format *YYYY-MM-DD hh:mm:ss*. *Optional.* Example: Workflow definition: .. code-block:: mistral --- version: "v2.0" wf: tasks: task: action: std.noop publish: all_executions_yaql: <% executions() %> all_child_executions_of_this_execution: "{{ executions(root_execution_id=execution().id) }}" all_executions_in_error_yaql: <% executions(null, null, ERROR) %> all_executions_in_error_jinja: "{{ executions(None, None, 'ERROR') }}" all_executions_in_error_yaql_with_kw: <% executions(state => ERROR) %> all_executions_in_error_jinja_with_kw: "{{ executions(state='ERROR') }}" all_executions_filtered_date_jinja: "{{ executions(to_time='2016-12-01 15:01:00') }}" Environment ^^^^^^^^^^^ Environment info is available by **env()**. It is passed when the user submits a workflow execution and contains variables specified by the user. 
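As a short sketch (the workflow and variable names are hypothetical), a task can read an environment variable through this function:

.. code-block:: mistral

    ---
    version: '2.0'

    greet_wf:
      tasks:
        greet:
          action: std.echo output=<% env().greeting %>

Here the "greeting" variable is assumed to be supplied in the environment when the workflow execution is submitted; if it is missing, the expression would not be resolvable.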
mistral-6.0.0/doc/source/terminology/0000775000175100017510000000000013245513604017634 5ustar zuulzuul00000000000000mistral-6.0.0/doc/source/terminology/workbooks.rst0000666000175100017510000000504113245513261022407 0ustar zuulzuul00000000000000Workbooks ========= Using workbooks users can combine multiple entities of any type (workflows and actions) into one document and upload to Mistral service. When uploading a workbook, Mistral will parse it and save its workflows and actions as independent objects which will be accessible via their own API endpoints (/workflows and /actions). Once it's done the workbook comes out of the game. User can just start workflows and use references to workflows/actions as if they were uploaded without workbook in the first place. However, if need to modify these individual objects user can modify the same workbook definition and re-upload it to Mistral (or, of course, user can do it independently). **Namespacing** One thing that's worth noting is that when using a workbook Mistral uses its name as a prefix for generating final names of workflows and actions included into the workbook. To illustrate this principle let's take a look at the figure below: .. image:: /img/Mistral_workbook_namespacing.png :align: center So after a workbook has been uploaded its workflows and actions become independent objects but with slightly different names. YAML example ^^^^^^^^^^^^ :: --- version: '2.0' name: my_workbook description: My set of workflows and ad-hoc actions workflows: local_workflow1: type: direct tasks: task1: action: local_action str1='Hi' str2=' Mistral!' on-complete: - task2 task2: action: global_action ... local_workflow2: type: reverse tasks: task1: workflow: local_workflow1 task2: workflow: global_workflow param1='val1' param2='val2' requires: [task1] ... 
actions: local_action: input: - str1 - str2 base: std.echo output="<% $.str1 %><% $.str2 %>" **NOTE:** Even though names of objects inside workbooks change upon uploading Mistral allows referencing between those objects using local names declared in the original workbook. **Attributes** * **name** - Workbook name. **Required.** * **description** - Workbook description. *Optional*. * **tags** - String with arbitrary comma-separated values. *Optional*. * **workflows** - Dictionary containing workflow definitions. *Optional*. * **actions** - Dictionary containing ad-hoc action definitions. *Optional*. For more details about Mistral Workflow Language itself, please see :doc:`Mistral Workflow Language specification ` mistral-6.0.0/doc/source/terminology/executions.rst0000666000175100017510000000434013245513261022556 0ustar zuulzuul00000000000000Executions ========== Executions are runtime objects and they reflect the information about the progress and state of concrete execution type. Workflow execution ------------------ A particular execution of specific workflow. When user submits a workflow to run, Mistral creates an object in database for execution of this workflow. It contains all information about workflow itself, about execution progress, state, input and output data. Workflow execution contains at least one *task execution*. A workflow execution can be in one of a number of predefined states reflecting its current status: * **RUNNING** - workflow is currently being executed. * **PAUSED** - workflow is paused. * **SUCCESS** - workflow has finished successfully. * **ERROR** - workflow has finished with an error. Task execution -------------- Defines a workflow execution step. It has a state and result. **Task state** A task can be in one of a number of predefined states reflecting its current status: * **IDLE** - task is not started yet; probably not all requirements are satisfied. 
* **WAITING** - task execution object has been created but it is not ready to start because some preconditions are not met. **NOTE:** The task may never run just because some of the preconditions may never be met. * **RUNNING_DELAYED** - task was in the running state before and the task execution has been delayed on precise amount of time. * **RUNNING** - task is currently being executed. * **SUCCESS** - task has finished successfully. * **ERROR** - task has finished with an error. All the actual task states belonging to current execution are persisted in DB. Task result is an aggregation of all *action executions* belonging to current *task execution*. Usually one *task execution* has at least one *action execution*. But in case of task is executing nested workflow, this *task execution* won't have *action executions*. Instead, there will be at least one *workflow execution*. Action execution ---------------- Execution of specific action. To see details about actions, please refer to :ref:`actions-dsl` Action execution has a state, input and output data. Usually action execution belongs to task execution but Mistral also is able to run separate action executions. mistral-6.0.0/doc/source/terminology/cron_triggers.rst0000666000175100017510000000070313245513261023236 0ustar zuulzuul00000000000000Cron-triggers ============= Cron trigger is an object allowing to run workflow on a schedule. User specifies what workflow with what input needs to be run and also specifies how often it should be run. .. image:: /img/Mistral_cron_trigger.png :align: center Cron-pattern is used to describe the frequency of execution in Mistral. To see more about cron-patterns, refer to `Cron expression `_ mistral-6.0.0/doc/source/terminology/index.rst0000666000175100017510000000021313245513261021472 0ustar zuulzuul00000000000000Mistral Terminology =================== .. 
toctree:: :maxdepth: 3 workbooks workflows actions executions cron_triggers mistral-6.0.0/doc/source/terminology/actions.rst0000666000175100017510000000337113245513261022033 0ustar zuulzuul00000000000000Actions ======= Actions are a particular instruction associated with a task that will be performed when the task runs. For instance: running a shell script, making an HTTP request, or sending a signal to an external system. Actions can be synchronous or asynchronous. With synchronous actions, Mistral will send a signal to the Mistral Executor and wait for a result. Once the Executor completes the action, the result will be sent to the Mistral Engine. With asynchronous actions, Mistral will send a signal to a third party service and wait for a corresponding action result to be delivered back via the Mistral API. Once the signal has been sent, Mistral isn't responsible for the state and result of the action. The third-party service should send a request to the Mistral API and provide information corresponding to the *action execution* and its state and result. .. image:: /img/Mistral_actions.png :doc:`How to work with asynchronous actions ` System actions -------------- System actions are provided by Mistral out of the box and are available to all users. Additional actions can be added via the custom action plugin mechanism. :doc:`How to write an Action Plugin ` Ad-hoc actions -------------- Ad-hoc actions are defined in YAML files by users. They wrap existing actions and their main goal is to simplify using the same action multiple times. For example, if the same HTTP request is used in multiple workflows, it can be defined in one place and then re-used without the need to duplicate all of the parameters. More about actions; :ref:`actions-dsl`. .. note:: Nested ad-hoc actions (i.e. ad-hoc actions wrapping around other ad-hoc actions) are not currently supported. 
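For example, the repeated HTTP request mentioned above could be sketched as an ad-hoc action wrapping an existing system action (a hypothetical definition; the URL and names are illustrative):

::

    ---
    version: '2.0'

    check_status:
      base: std.http
      base-input:
        url: 'http://example.com/status'
        method: 'GET'

Workflows could then call ``check_status`` directly instead of repeating the HTTP parameters in each task.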
mistral-6.0.0/doc/source/terminology/workflows.rst0000666000175100017510000001141713245513261022430 0ustar zuulzuul00000000000000Mistral Workflows ================= Workflow is the main building block of Mistral Workflow Language, the reason why the project exists. Workflow represents a process that can be described in a various number of ways and that can do some job interesting to the end user. Each workflow consists of tasks (at least one) describing what exact steps should be made during workflow execution. YAML example ^^^^^^^^^^^^ :: --- version: '2.0' create_vm:   description: Simple workflow sample   type: direct   input: # Input parameter declarations     - vm_name     - image_ref     - flavor_ref   output: # Output definition     vm_id: <% $.vm_id %>   tasks:     create_server:       action: nova.servers_create name=<% $.vm_name %> image=<% $.image_ref %> flavor=<% $.flavor_ref %>       publish:         vm_id: <% $.id %>       on-success:         - wait_for_instance     wait_for_instance:       action: nova.servers_find id=<% $.vm_id %> status='ACTIVE'       retry:         delay: 5         count: 15 Workflow types -------------- Mistral Workflow Language v2 introduces different workflow types and the structure of each workflow type varies according to its semantics. Currently, Mistral provides two workflow types: - `Direct workflow <#direct-workflow>`__ - `Reverse workflow <#reverse-workflow>`__ See corresponding sections for details. Direct workflow --------------- Direct workflow consists of tasks combined in a graph where every next task starts after another one depending on produced result. So direct workflow has a notion of transition. Direct workflow is considered to be completed if there aren't any transitions left that could be used to jump to next tasks. .. 
image:: /img/Mistral_direct_workflow.png

YAML example
^^^^^^^^^^^^

::

    ---
    version: '2.0'

    create_vm_and_send_email:
      type: direct
      input:
        - vm_name
        - image_id
        - flavor_id
      output:
        result: <% $.vm_id %>
      tasks:
        create_vm:
          action: nova.servers_create name=<% $.vm_name %> image=<% $.image_id %> flavor=<% $.flavor_id %>
          publish:
            vm_id: <% $.id %>
          on-error:
            - send_error_email
          on-success:
            - send_success_email
        send_error_email:
          action: send_email to='admin@mysite.org' body='Failed to create a VM'
          on-complete:
            - fail
        send_success_email:
          action: send_email to='admin@mysite.org' body='Vm is successfully created and its id is <% $.vm_id %>'

Reverse workflow
----------------

In a reverse workflow, all relationships in the workflow task graph are dependencies. In order to run this type of workflow we need to specify a task that needs to be completed; it can be conventionally called the 'target task'. When Mistral Engine starts a workflow it recursively identifies all the dependencies that need to be completed first.

.. image:: /img/Mistral_reverse_workflow.png

The figure explains how a reverse workflow works. In the example, task **T1** is chosen as the target task. So when the workflow starts Mistral will run only tasks **T7**, **T8**, **T5**, **T6**, **T2** and **T1** in the specified order (starting from tasks that have no dependencies). Tasks **T3** and **T4** won't be a part of this workflow because there's no route in the directed graph from **T1** to **T3** or **T4**.
YAML example
^^^^^^^^^^^^

::

    ---
    version: '2.0'

    create_vm_and_send_email:
      type: reverse
      input:
        - vm_name
        - image_id
        - flavor_id
      output:
        result: <% $.vm_id %>
      tasks:
        create_vm:
          action: nova.servers_create name=<% $.vm_name %> image=<% $.image_id %> flavor=<% $.flavor_id %>
          publish:
            vm_id: <% $.id %>
        search_for_ip:
          action: nova.floating_ips_findall instance_id=null
          publish:
            vm_ip: <% $[0].ip %>
        associate_ip:
          action: nova.servers_add_floating_ip server=<% $.vm_id %> address=<% $.vm_ip %>
          requires: [search_for_ip]
        send_email:
          action: send_email to='admin@mysite.org' body='Vm is created and id <% $.vm_id %> and ip address <% $.vm_ip %>'
          requires: [create_vm, associate_ip]

For more details about Mistral Workflow Language itself, please see the :doc:`Mistral Workflow Language specification `

mistral-6.0.0/doc/source/cookbooks.rst

Mistral Cookbooks
=================

- `Mistral for Administration (aka Cloud Cron) `_

mistral-6.0.0/doc/source/main_features.rst

Mistral Main Features
=====================

Task result / Data flow
-----------------------

Mistral supports transferring data from one task to another. In other words, if *taskA* produces a value then *taskB*, which follows *taskA*, can use it. In order to use this data Mistral relies on a query language called `YAQL `_. YAQL is a powerful yet simple tool that allows the user to filter information, transform data and call functions. Find more information about it in the `YAQL official documentation `_. This mechanism for transferring data plays a central role in the workflow concept and is referred to as Data Flow.

Below is a simple example of what Mistral Data Flow looks like from the Mistral Workflow Language perspective:

..
code-block:: mistral

    version: '2.0'

    my_workflow:
      input:
        - host
        - username
        - password

      tasks:
        task1:
          action: std.ssh host=<% $.host %> username=<% $.username %> \
                  password=<% $.password %>
          input:
            cmd: "cd ~ && ls"
          on-complete: task2

        task2:
          action: do_something data=<% task(task1).result %>

The task called "task1" produces a result that contains a list of the files in a user's home folder on a host (both username and host are provided as workflow input) and the task "task2" accesses this data via the YAQL expression "task(task1).result". "task()" here is a function registered in YAQL by Mistral to get information about a task by its name.

Task affinity
-------------

Task affinity is a feature which could be useful for executing particular tasks on specific Mistral executors. In fact, there are 2 cases:

1. You need to execute the task on a single executor.
2. You need to execute the task on any executor within a named group.

To enable the task affinity feature, edit the "host" property in the "executor" section of the configuration file::

    [executor]
    host = my_favorite_executor

Then start (restart) the executor. Use the "target" task property to specify this executor in Mistral Workflow Language::

    ... Workflow YAML ...
    task1:
      ...
      target: my_favorite_executor
    ... Workflow YAML ...

Task policies
-------------

Any Mistral task, regardless of its workflow type, can optionally have configured policies. Policies control the flow of the task - for example, a policy can delay task execution before the task starts or after the task completes.

YAML example
^^^^^^^^^^^^

.. code-block:: yaml

    my_task:
      action: my_action
      pause-before: true
      wait-before: 2
      wait-after: 4
      timeout: 30
      retry:
        count: 10
        delay: 20
        break-on: <% $.my_var = true %>

There are different types of policies in Mistral.

1. **pause-before**

   Specifies whether Mistral Engine should put the workflow on pause or not before starting a task.

2.
**wait-before**

   Specifies a delay in seconds that Mistral Engine should wait before starting a task.

3. **wait-after**

   Specifies a delay in seconds that Mistral Engine should wait after a task has completed before starting the tasks specified in *'on-success'*, *'on-error'* or *'on-complete'*.

4. **timeout**

   Specifies a period of time in seconds after which a task will be failed automatically by the engine if it hasn't completed.

5. **retry**

   Specifies a pattern for how the task should be repeated.

   * *count* - Specifies a maximum number of times that a task can be repeated.
   * *delay* - Specifies a delay in seconds between subsequent task iterations.
   * *break-on* - Specifies a YAQL expression that will break the iteration loop if it evaluates to *'true'*. If it fires then the task is considered to have experienced an error.
   * *continue-on* - Specifies a YAQL expression that will continue the iteration loop if it evaluates to *'true'*. If it fires then the task is considered successful.

A retry policy can also be configured on a single line, as follows:

.. code-block:: yaml

    task1:
      action: my_action
      retry: count=10 delay=5 break-on=<% $.foo = 'bar' %>

All parameter values for any policy can be defined as YAQL expressions.

Join
----

Join flow control allows you to synchronize multiple parallel workflow branches and aggregate their data.

**Full join (join: all)**

YAML example
^^^^^^^^^^^^

.. code-block:: yaml

    register_vm_in_load_balancer:
      ...
      on-success:
        - wait_for_all_registrations

    register_vm_in_dns:
      ...
      on-success:
        - wait_for_all_registrations

    try_to_do_something_without_registration:
      ...
      on-error:
        - wait_for_all_registrations

    wait_for_all_registrations:
      join: all
      action: send_email

When a task has the property *"join"* assigned the value *"all"*, the task will run only if all upstream tasks (ones that lead to this task) are completed and the corresponding conditions have triggered.
Task A is considered an upstream task of Task B if Task A has Task B mentioned in any of its *"on-success"*, *"on-error"* and *"on-complete"* clauses, regardless of YAQL guard expressions.

**Partial join (join: 2)**

YAML example
^^^^^^^^^^^^

.. code-block:: yaml

    register_vm_in_load_balancer:
      ...
      on-success:
        - wait_for_two_registrations

    register_vm_in_dns:
      ...
      on-success:
        - wait_for_two_registrations

    register_vm_in_zabbix:
      ...
      on-success:
        - wait_for_two_registrations

    wait_for_two_registrations:
      join: 2
      action: send_email

When a task has a numeric value assigned to the property *"join"*, then the task will run once at least this number of upstream tasks are completed and the corresponding conditions have triggered. In the example above, the task "wait_for_two_registrations" will run if any two of the "register_vm_xxx" tasks are complete.

**Discriminator (join: one)**

Discriminator is the special case of Partial Join where the *"join"* property has the value 1. In this case, instead of 1 it is possible to specify the special string value *"one"*, which is introduced for symmetry with *"all"*. However, it's up to the user whether to use *"1"* or *"one"*.

Processing collections (with-items)
-----------------------------------

YAML example
^^^^^^^^^^^^

.. code-block:: yaml

    ---
    version: '2.0'

    create_vms:
      description: Creating multiple virtual servers using "with-items".
      input:
        - vm_names
        - image_ref
        - flavor_ref
      output:
        vm_ids: <% $.vm_ids %>

      tasks:
        create_servers:
          with-items: vm_name in <% $.vm_names %>
          action: nova.servers_create name=<% $.vm_name %> \
                  image=<% $.image_ref %> flavor=<% $.flavor_ref %>
          publish:
            vm_ids: <% $.create_servers.id %>
          on-success:
            - wait_for_servers

        wait_for_servers:
          with-items: vm_id in <% $.vm_ids %>
          action: nova.servers_find id=<% $.vm_id %> status='ACTIVE'
          retry:
            delay: 5
            count: <% $.vm_names.len() * 10 %>

The workflow *"create_vms"* in this example creates as many virtual servers as we provide in the *"vm_names"* input parameter.
E.g., if we specify *vm_names=["vm1", "vm2"]* then it'll create servers with these names based on the same image and flavor. This is possible because we are using the *"with-items"* keyword, which associates an action or a workflow with a task so that it runs multiple times.

The value of the *"with-items"* task property contains an expression in the form: **<variable_name> in <% YAQL_expression %>**.

The most common form is:

.. code-block:: yaml

    with-items:
      - var1 in <% YAQL_expression_1 %>
      - var2 in <% YAQL_expression_2 %>
      ...
      - varN in <% YAQL_expression_N %>

where the collections expressed as YAQL_expression_1, YAQL_expression_2, YAQL_expression_N must have equal sizes. When a task gets started Mistral will iterate over all collections in parallel, i.e. the number of iterations will be equal to the length of any of the collections.

Note that in the *"with-items"* case, the task result (accessible in workflow context as <% $.task_name %>) will be a list containing the results of the corresponding action/workflow calls. If at least one action/workflow call has failed then the whole task will get into *ERROR* state. It's also possible to apply a retry policy for tasks with a *"with-items"* property. In this case the retry policy will relaunch all action/workflow calls according to the *"with-items"* configuration. Other policies can also be used in the same way as with regular non-*"with-items"* tasks.

Execution expiration policy
---------------------------

When Mistral is used in production it can be difficult to control the number of completed workflow executions. By default Mistral will store all executions indefinitely and over time the number stored will accumulate. This can be resolved by setting an expiration policy.

**By default this feature is disabled.**

This policy defines the maximum age of an execution since the last updated time (in minutes) and the maximum number of finished executions.
Each evaluation will enforce these conditions, so the expired executions (older than specified) will be deleted, and the number of executions in a finished state (regardless of expiration) will be limited to max_finished_executions.

To enable the policy, edit the Mistral configuration file and specify ``evaluation_interval`` and at least one of the ``older_than`` or ``max_finished_executions`` options.

.. code-block:: cfg

    [execution_expiration_policy]
    evaluation_interval = 120  # 2 hours
    older_than = 10080  # 1 week
    max_finished_executions = 500

- **evaluation_interval**

  The evaluation interval defines how frequently Mistral will check and enforce the above mentioned constraints. In the above example it is set to two hours, so every two hours Mistral will remove executions older than 1 week, and keep only the 500 latest finished executions.

- **older_than**

  Defines the maximum age of an execution in minutes since it was last updated. It must be greater than or equal to ``1``.

- **max_finished_executions**

  Defines the maximum number of finished executions. It must be greater than or equal to ``1``.
mistral-6.0.0/doc/source/img/Mistral_dashboard_django_settings.png
\&æ¾L¿xãyYyk×oTU¿‘éfo_¹06 /Ú¨¹úÎìºvêdÛ>,<*"*Z"©âý>ß%üäþ“ñ/ŠDÖíºöušE÷¶×Oœ ÐÚõ9 ôM-m»÷·p©Å¯ÛÎæ1Å+V{yVx]ƒÒúÚüŸU!Pš3€æ´ð>‘Ho°Sß·ìº\ ¾ùøI‚šš#›Ðï³#*J‹»s6œó&Žã& µkn.È‹;µïJš¼¼Åàq“Û77f$Ý<ì˜ÉˆˆsaÏ1îÃ{´°JÒ£Ž NeŒˆV{yîºõÌ£WKcYFì9Ÿ–Ó<mLewùOdD$²èì2vHO3C$;õþ9Ÿ‹±ÄˆHÐ}Ä<ç.6攟öðæÙ£!;¨bÇÕ^žÛ®;Ž%š½5bÑyùâO÷î9—”¯ßªïôwßëø] ]Þs·ñÙî— 5±9wÉÌÛÞ‡JÆ8Í¢›Û6ʼnmg~:e±íÍ‹ÛIä§/Ÿ5øø†`"úlf×+gý·>ÍÎ#ãÎCgÍ\î½#‹ˆMêõÜw÷Áøl™IkGg7¢ÝÂVµ£ªõ'¾;ÒøÊ®Í·S¨yÿi3l©jüö7‚è†g»|ý*ã×Eò×r¨-ÏTnݪåìéníÛµQº5ä×Ñæ½&Ì]ºêßKÿ5Ýù-——{8¯ì¹™#%qFôù=ŠúûÝNÊ•’¬ 9Ô—Ø@yᔢ‹>á ¹R*ÈŠ¼xˆ,ÇÈË‹ˆš4µ´4‰³“COî®¶ªUEèñ»ì™\ÀeÏCökv戚iV³bý*ã×Qö=lå?Ú@·ií‚\ÚËôËWCž¥(ŸEËÄq—ýã.‘¡u÷ã<>ÎŒþé6u òWV?¾ô>cùŒéË—;uûÂs£ÊcðŸ1&¯LD)¥ËŒ‰ä[yÛÓ]º··21 ˆˆs™¼ÜgßµÉCy8[™ˆS¢/ ˆæšUµ£ª;-Û;’h°Ò³ñºöD/5¨¦¤~•ñë( +¨ZË ƒ‚oÞ»ÿP£ëÓ‚´˜ËG=Î#ºMDñDNDW4;Т6F±jOÒ={@®ÿþí 9…)oþÕÊÙòr–|ëøÁ[œSSÛaËfLˆ®8ÔGÕŽª$9•¬i4Ñ–õIùÇ4ïKùúUƯ[ºÙÛ“8›Ê,BªPZe$•ÉÂ#ïîÙ4¦ªôàÓCûÙZê‘@߬Ë)$½!/?)u^з½‰€Do¹ÎW¸c7 'Îî×ÉB$ ¡Q‹NÎî‹5ŒÓ˜H"..,– L[õ›<]Q¾brÿnÖM„LȹòðUí¨Ê‘>§·! ›;ÍUws&2±î:`´çxYèULSVU¿ÊøuHlT”âYFQ1qò토ëêûBBbR`ÐŒÌlM*ÿq[êêì>²…‰ž´ #åəͷˆ‰Ïþqzü$·÷†˜±œø°“Ñj‘\ÚqÈeü¨9¬ŒE)·ƒ4ýÊ|ç©û‹ÆÍùÜXO’—yˆh޼|o¼éÄ©ïN5g=ðWÒšªU)ØyÉcòÜG艳žF¢¶3•V[íåɹD\÷*5.x÷¶Ðü*ª_eüºÓê ³³ë®XqttHLL˜i1 €z`i&JSñ-€7—HÛµkYy“–Ÿe 2(ƒ Ê C€2È  2(ƒ Ê C€2È Œž¶Pgµ—§|K‹sÓÂüO]ËdŠr.“ˆ s_¦Æß ¼ñüµ]”n‚Ƨ›½}åÂØ¨¨ú ÑhЭ]¿‘ˆHϨƒÛÂ÷†^û.HQÎI oji۽ϸ…K-~Ýv6U¹ Ÿ ù€Òœ4§#£Œ$ùOÃü‰õ*_ÆH&ÎI{zîçÓ‚~‹[h¸ û¶Šq¶|AÛA誆~¡„Шý$*Vº‘ß¾@ãFªÖ&hL¢bâʯ"C¨±†ž!”MEàY·6F)/”@ôŽŠÔl‚FE‘TÈ Zz† Ÿ‡ÀMlz¹-ù ×™Ÿî*«Õžè¥ŠÔl‚F‰@ÐyLV”y‚ô‡*ßÚk$å_®î&hS“S´€®kè÷丠IË·&E¿VÈ™¾©•m>cGÈByQ~’šMИÈd$Ï p N4ô Añ~ƒüô§A;×ú«½<9—ˆ ò^¥ÆïÞšÏÊï¢j4V¸uPW˜]wÅŠ££Cbb²ÐÀL‹ÔK3QZz¦¶£Ð.‘¶k×:""²ò&ݘ‡õ”A†e!@dP”A†e!@dPFOÛTAÔÝyÆ»6ÍšðܱaçüÃð\(ÓÍÞ¾ralTTýGÐh4è ›ŽúlR³`ßC¾ñÙÔ´ãw¢]Ú – ù€Òœ4× 3„V‹zܘÀˆˆÒŸ†œ(I87ìî6°“•å¿x|ë°_xcD´ÚËs×­g½ZË2bÏù„´œæáhc*˸{ÌçxbI…7';´²Š_>Û,<›1"Ytv;¤§™¡@’zÿœÏÅXbD$è>bžssÊO{xóìÑ U… Eö=lËVÄÙòÕ¨˜8­ Ët†0ÎNıŠånïŽ4½ºë·¨jÞÚŒÝî® Ë7Í¢›Û6ʼnmg~:e±íÍ‹ÛIä§/Ÿ5øø†`y…E­ŸìÚêŸBÍûyÌX6áöºSR"úlf×+gý·>ÍÎ#ãÎCgÍ\î½#‹ˆMêõÜw÷Áøl™IkGg7¢Ýª A»*ä¯å P z¦²5Q¼²rî,poDr—<Ù̺Slúóü“, å=`ÌôÏ¿ã²Ä²¬˜"GE…Ë>‘òoø\g=ÇË ×o9}=.+OÂI’ûðâ!²#//"jÒÔÒÒD$ÎN=¹[M!h}[ù¶Ðm úBQ¢‡•ÊÛù–­EõS¬Û,<›15åšXíå¹íÊÃIŽí›› 
3’nöÌT×Å=Õ;jÔA]dßölEœ-_ЉÓZ@º¬AgÍéd´ò׸½;Òôê®ß¢R¨yÿi3>v»».@¬¦\Ò"ýËl¾ûœ¬úLže[©A¡Ë{îÖ!>Ûý¤&ö#ç.™yÛûP~…:œ/j- ð!¢¿¤ÎÓÍø>Åÿ¢ÖOvmõO¡æýßEhÒU;jÒAÝU!x-g€êhÐ3•[=*]^íå)ÿ‘¯ztg{#’ ¸¬àyÈž`Ö}œúò©=ØÅ·S ¸¬ -lPåcM ºèž+¥‚¬ÈÇ©Ãx%YM·*>y›1Æîù[.´*ÛrÙ'R~Ð>×YÏñU–—§èZùÑþc·“r¥$+Hõ%6PÃŽ¨ÚQ£ê2û¶òm Ûô=„DˆîQé8{Åt;"߲БDýÔ—·':RV~‡hh…cu"êö…ç¦ôŽE‰ösLóÄËoj<cúvDÕŽštP§aX@hЩšgG÷c•lzJ4èjÉšQ²úòD¢þD×KÊ{Unð у £TÍçÜv–!é/\±º¬Ä­ÿ-–1"r"*ý>¿·â jÊk¬ÊލReuT7{{gS¹‘EHj£A2z¾ëŸ¼`Lï–æ"F‘EŦ#±4lžCkC¶¸`=£¾üè}r™ÙËÆ€†Öýg¿]ùXÇnNœÝ¯“…H@B£œÝW¬á8A”¶cíúŠŸSESK6ŸÓÛÆ†Öýç á1§;©*¯±*;¢JÕÔA±QQŠgEÅÄÉ´€®kÐ÷XÁµ÷M9f©«™g¿J >±C¾©Ðo×E÷‰3>fHi‚6ûËßN ²Ü÷å©ns—¹èI2ŸFãí&V8–äÒŽC.ãGÍ`e,(ÊH¹t¨B…a£YüyåçMçíÊ>@‘aD´+¥ó”÷-„’WO‚~ (^• ª¼¼ ¯DPÿÜÒ*;¢J•Ôi˜~PW˜]wÅŠ££Cbb²ÐÀL‹Õήö2÷Þp®®Tõ¶²ú-fuÞ‘7‡¥™(-=SÛQh —HÛµkYySƒeT·>ríÒÊ„ ŒZ8Í@±k;œšk4€¨A2ª[Gó»»/tµlRü*ñÆ6?å~tB£é4@oâ(#Œ2€7F€F!@dP”A†ey†PáuÅ \]E«[½€¥A¿AñNbÅ%¯L’Ÿ•â÷×Í|å/ø§_c,?/ºêýã-Å‹ªP1yPÚl娄.ï¹[?>¹}«÷æƒ×òû.™i¬haQë'þ[ûfóñpã¡Ë&è©VM/ˆH¯Û¨‹zgœÜÚ¸Ó¹¨˜¸ò?Ú@‡éL†°ÚËsµ—çÊe3ÇvJ=¿õ&¹"}ÛÍs"Íh‘»?YéŽûÝNÊ•’¬ 9Ô—Ø@y¡Gw¸7"¹€Ë ž‡ì fÝÇ©¯_Ù«C' \fµá¼|¡šfË›2@tÑ'Ý—//I#`TõçGbiØ<‡Ö†$0l1pÁ {¦&Gg¹‡| Æ,hQe³¢:v£pâì~,Dµèäì¾X±iøœÞ6†$0´î?g9¥¾Y5½`’‡‡7~ÕoÁûC-jÐ5]¥x–&!Ô Ý¾‡@D{^NX’û­_É£ƒ‘i‹?Z>Z©ÿ@¡ß®‹îg|4Ì Òmö+®ò¶ƒr÷º}<¼ªf+D%¹´ãËøQsX Š2RnR´·+¥ó”÷-„’WO‚~ ÈwWÕ¬ú^0飣›Nº<ïá-—ÓjÒ;Ýéu…ÙÙuW¬8::$&& Ì´Puñ·–®î¸Ýû„¶ãbi&JKÏÔvZÃÿ{wU•ï{ü¿kÈ@&BÀ0D ¢hA” 2ƒ‚âql»âéîs›>÷Ðç>W}žÓÃŶ»mAFidPÐAS†`@ÈÀB€‰¤ªÖ}Q!+d¬$•T ¾Ÿ§žìµW­ýßµy±µ×®m³GFvKJJ®¹Ê»g) Ÿ9Q6nP wà/že´`~‚R¶œïÕzË/€&ðâ„PÿšÀ»gp/„@#!ÐH4­µítìC£v} ¨¸Ø•Î æ'8ÿpØŠó³Ó÷®ûú@qk<ú`Áü~JÕ+ô‹‰©Ù˜–’Òú•Ü2Z;!D÷ŠêÕcÿ¡”¤”c6›½ÁþÎ3u“Hï{¦Ìþù¿;Øò5›T˵f¸ÎOL³Z-÷ »w`tŸïö8u:Ó•·8JòOîZ##牔ßñW..˜ŸðabÖÔÁ]CÍå—Ní_¹öPaÔÓ®”yиièj¶]ÉJY»jÏÃp^¸pþË•¯3 —^(/p.¦¤¦{¬ oæ±g*‡„Oîü…owî¹|åjýM>Á½†Lû¾‡}¦Ûé¥ï­Ï–NCfÌzuâá·6Ûëi7Ç¿8-|ïŠÅë2í1cž|~öá7W¿±p³Œ¼NµêÂWïhpÀí+’Ï•(GINâŠïAêoŸgݶâPf¡]Jò“7~.='Ô1*¼@Ì€^Η§ ðn»†à”{éÊöïöžÏΩ«ÃÍßå7|›r¢þ3EdhýíQ"ý~0Úpa\´yL+p %„’’Ò]{ÿá¤jü{Ë” RêZŽ7]&²«âÏ»EÎÕß~ZäÄÛ‹RŒêA)%JIv´Aýbb¤¼@ªÌ,"*4‡fÙŽCÉG—¯\“Ú¤x ";•Ì‹ëìoK`ÄÐÓª®zpîÝþbò:w„JÝ\ûÚÄÒIs†D…ZMbn×9jÔ´çœí¹"qíˆ^ -%¥ò·ŒRRÓ/Ï–àíZûBfÖÙ»ó®4gïïíùøÔ×ã}EWNúXú>U¹jiöÓ_j¶]>½ëÝ 
¶Êëµ¶Û¾]²*~ÂCsã:˜®çeÞµÊÙù“äÜç^~m¬Åà~eoÁíîbDG÷¯\ˆœ•uÎììÁ‚š£®ß â·‰PMX°5·¡_и…)›=2²[RRrÍUþ-#m €fîØ1¼r!"¢K~þ5“Å׃5ÇÎ]‰jÇm«¯¹¸¤ÔÓUxŽC…„_¸p¡æ®!ÐH4„@#!ÐnÍ„°`~‚§Kh"ï­·‹§ ¨Oåé²ÃVœŸ¾wÝ׊ºz¶èS“k=q÷Èsš›°§·ð#¥ûÅÄÔlLKIiýJnm:!ȳp“Hï{¦Ìþù¿;èÁ2ä–>ÛöRÕò@­™®kë ÁÉQ’r×9O©ó ó'n^øþ)Ã¥|6ÿÙpÃGn|Í_yúÞù¾G'Çöèh*Í;{àÓõ;®:ûÜ7mÊð¨Ží¤øâ©ƒŸ®;”oÎ÷¾¿ódÍþ RÊ]ïùì‹÷ìøï$ñ›ò옠ï–þ%%[: }|Ö+S޾µ¡¼žþ 2Ç¿8-|ïŠÅë2í1cž|~öá7W;W=!Þ'½¼×ì_L®×m‹ÿ”¥îœùÚ÷}þög‡gº^úÞúlé4dƬW'~k³½êÈ¿œÝwç–õï)(’€;ïböó‡Þ\’ÿÆÂEÕö´žêQëàõ|>“Ÿ‰÷ÙþÁŸæHÇ{¦>Ñ«ÞÁë¿®qêÚ‹g&ß•³zÙ'ŽÀn±£¦ˆ,«k‹ÕòÀM™ÑÖïT^0?aÁü„ß¼:{|Ô…¯Þ; "%Ÿí´œâ¯”ˆXgu.\y®Ö7®\{øl¡]%çö­c¸³qFcÇߓΕ(GIÎÞå{ŒþÖß¿AÓã¬ÛVÊ,´KI~òÆÏ¥ç„ÊUu:ß&E'6FÐÇÿLÏ/wä§n‰­ì°}E²³’Ä߃&Tyá_¿ø>=¿È¦ÄVxrÛ* רêQÏàµ~ 0¶ýãpv‰r”äî_¹«Éã×5N]{q]Ä·}XX µ¼àܾMËêßhÌ€^Η+ŸêÒÖ¯!Ü|eÀÃH^ž7ê…n²èœ1»ÛÕeÍÕeܘlcņáãü;Rdµî’,2¤þþ Šé÷ë„ÑF-%œ7œÕ‹Hö¿ ÃZÙ!Q÷MZííêŽá3ãû÷èèo5‰ˆRŽÆPz¯õsè!ò™~÷‘û›6~]ãÔµ+>Ú=õþŸÌÕ1°<ûØöÏ6Sõl”iEnÑÖB­Î/M Nˆ•?ÜÑ3wõÕŠ”(% MÁ?#2\今¥Á"µ_pÝi‘o/JqaêMÃDn|‹~wÍJæÌ‰+\¿rqfÞµR›]uúíoæ8Û«íiÓ ¨kðºd‰ ù¾bé®Êö2¥‚”ºVqº78~]ãÔµƹƒŸrP)ißëWgÍÜplUÍÚúÅÄHyT™YDTh޶>˨VFùÖMŽQOîu棊9÷¹"qí>Kþ,Mxjp71ùwþôOTڗͬdmbé¤9C¢B­&1·ë5jÚs®¿÷Á¹wGø‹É?|èÜ*usµµ"¶ò²Ò2»)¨ë©3+Û«íiÓ ¨kðº¬ùAâgßá'&ÿð¡sFV¶ïT2/®³¿I,CgLkpüºÆ©k/^Ÿ:´_¸¯Ù0+UûÕƒ´””Êß2JIMw¾\ùP¯¼† "IË/M|¾ð¿ÖUL1ú$9÷¹—_k1êÿ)ÒÒuK·M›4ëåü¥$÷Ç]^WÖàe‡úÙ¾]²*~ÂCsã:˜®çeÞUË—ÜuYš}çô—F…šm—Oïzwƒ­Z%nþá™Gçþ*Àb+º’‘¼Jd®³½ÚžºX@Õç9¼±pQ]ƒ×¥tý²íMyòÕx‹íê™äµ*r’³ýûÅ{{>>õõx_GѕӇ>–¾OÕ_|]ãԵϚôس›Êòs’Ö×÷Ùrû€»ÑÑý+bcge3û{° ©/,è½øÍž®£©¼÷¹ J _0?äÍ··¶‘qš&,Øš{åªG6 Ð(›=2²[RRrÍU^9ËHIøÌ‰²qC}÷­Â½^~¤O×@ÃÔ®ó°§â$íŸ-Äûf-˜Ÿ ”-gç{M»?M³¦¸ÿ´y$Ì·ìrVâûëªÏ‰jýqÐB¼u–ÐÌ2·¹[m–€BB ‘h$ €FBpUÕÇ·ª6ý<e‰;}ø€îí-Žk3&îüæx¡s•uÀsFFG„Zly9©»×}qÌîl¯ÿYÅγ|å°•—^ºqtÇŽÄqåMæ½ÏNö ýbbj6¦¥¤´~%·Œ6†¼2¡Ï¡M+7e\¼n ì{ï°'äøbQc~1)|ϧ¯L/i×óžÇf½49ëÝ ×\zöÖ )1ù…õêÏ£ÿòBè»ïo)2œí-»3hÕò@­™®kÓ aŒ¯ümÇ©|ùž›~dKúg{ijm«íL7D$?cÿ’Õƒ<ÓiÃ;¹.kˆ£üZî‰}[O”üô?žë¼å‹RåË~kèñãG Šö7Ù .ü°uŶ4¹){(e4nÚƒ:‡šmW²RÖ®ÚsÁ0DD)ߨG'ßÝ)ÄT”±óG;s—,œÿ:7õýÔ¨>!~Rœ{òÀ–5{óÜõYÝÎbôÒ 
åÎÅ”ÔtàÍÚtBØQ&OÆ÷ß”r*ór™ú4}‚¿l:%ú¼ýÔ7â?Zäӯޝ#ŽYUµñ—³ûîܲþ½3EpçýOÌ~þЛKò«v0Ç¿8-|ïŠÅë2í1cž|~öá7W‹ˆuÜóÃ÷®X²6³4xàèÉ"ËßX¸¨Ú,£g&ß•³zÙ'ŽÀn±£¦ˆ,klͨUµÔ§,7ëÄ®u;Ž•".rú¦ŽY"ã›´…L‘‡«5-üë7þ,<¹m•į!¦ÇY·-<”i"ùÉ?Ÿ4‚Èj™1ؼsáLÃÉ;öÕòZ·w]Ä·}XX`Éù‚sû6-kRͨ]e*àê@s´é„`”§o_Ÿ¾]DüÃûÇ=:ã•«ÇþxXDrEz‹¤éŽ‘"—›´…"—ª5©;†ÏŒïߣc ¿Õ$"J9ªuˆé÷ë„цT»ï¡§Èú†¶·â£ÝSïÿÉŒQ˳mÿlÃ1Õ¤²Q ‚€[´é„ •ä¦n_3cøS"‡Eä‹y"JÒ*¯#D‘’Ï›0ªq×)^[­qΜ¸Âõ+gæ]+µÙU§ßþfNµ§EN¼½(Ũ~ct†È0‘77*¥D©Ê)RƹƒŸrP)ißëWgÍÜpl• yúÅÄHyp ÀMÚôó~1ëþ!½Bý-bò î3bºØíç—÷y|ö¨ž1‡ôòüã>Ç–æÔ?TUJÖÀð¾qc&8ö}p±ÚÚ[yYi™ÝÔuÈÔ™5ß¾6±tÒœ!Q¡V“˜ÛuŽ5í9gûšdû¨§ïíhkèÀGæ9sEâÚé,ñúÔ¡ýÂ}͆Y)®¸AZJJåo¥¤¦;_ž- ÀÛµék¶?2jÚ˜Î{I^öé/ÿ|Ðy{²qíëßo=gì“#ÚûX ãøÆ¿¬«òS§UmVó7LÌOPÊV^RtùBúžeïï+®~)àÃÍ?<óèÜ_XlEW2’W‰Ì­ÖÁöí’Uñš×1Àt=/ûð®Šëå[>øbÂä)/Ž6®eìßtLDD>IÎ}îåׯZ g%Ïšôس›Êòs’ÖsÁm¸5À]Œèèþ• ±±ƒ³²Î™ý‚=XPcE=ÿÚŒÔwÿûûêw õ ¶æ^¹êé*|@÷öǵ‹™Gw~s¼Ðõªê¯¡Áp£~115ÓRRZ¿€[FÛM•çÙn?çòÊ„>‡6­Ü”qñº5<²ï½Ãžã‹[Úˆjy ÖÌ׵݄P“R ó'n^øþ)Ã¥|6ÿÙm ÿ2ë7¯˜˜5up×Psù¥SûW®=TPÑÁ8÷î1ù‡;B¥nv6®M,4gHT¨Õ$æv£FM{ÎÙþ‹Y÷éêo“OpŸÓÅ^q1`M²}ÔÓ÷ö4‰5tà#ó¬¤ÖæŠÄµ#´’´””Êß2JIMw¾<[€·ó¾k"’´üÒÄç ÿkTÎZš}çô—F…šm—OïzwƒÍù¾íÛ%«â'<47®c€éz^öá]«œ?8ldÔ´1-ö’¼ìÓ_þù s ò-|1aò”G×2öoªkrQ¥Z7úIrîs/¿6Öbp¿rkâöw1¢£ûW.ÄÆÎÊ:gö ö`A®P_XÐ{ñ›+=òóAüf‘W ¶æ^¹êé*߯{‡?¹žw!ãèþÝ;Ò ›¿kÀíÆm÷!„„OÿØÔñaÚ7ö½æŸŸ=¬w‡ckÿöß>:־ϰ)ÿú ©ÁUUÍuOŸN'Ö¸ð~ºë¼oìÍÝà¶äæ;•»uí2g攑Ýõ®ÃýDdö ×¥üì¶­"â7||ƒ«ªê ¢TÙsh—ÉÍ8òÕ'‹›¹#ÀíÉ ³ŒªÊ½teûw{Ïgç4ê]½EDäDÅRºˆˆôhpUUîÍ~ixÄØiÓE¤¼ û‡o?]¼Q%qcB())ݵçÀñNªÆ¿7C¤¯H_‘T‘^""’ÙપŠw¬þýwÝ;†w45>jÐäIëol|!ÀíÎ ³ŒìÇ¡ä£ËW®ImR<‘5Év™4º‹¯XºÇ?""¥û¾lpUUÿúø}ƒ»yg3~8uADDM*¸Ý5÷BfÖÙ»ó®4gò-K>—GGÜ=ãWÃÌöâ+§öïùt›C £þUU}xD=sxëºÄË"¢”¹ïîëÓ)Äl¿zîÈ›ö]2*¦‘{ü+jSt¨ÊÚ—.]FD›äxE1#†Ç%Ÿ½Ú·sŸÉQ|õ\ꉜrè§]D” ¸wxŸ“{“®†ˆ(e<<6sï«n½.ÑÌ;•k¾×ÙÒàJøùKŸ$7 kˆ¥ìrÆ¡õ[RŠ ßÀ} E¸ejõìÎ""ŸfU™lõˆH—‘Uç^‡ û'÷¸r|Ù¶|sÙ ›vïGï'_­¥ý¥)w^;¸yŇ‹ßY²êÛ ½'Ín_¹jNø¹m«—ýéo+Ö 6÷ng£åy‡e|ý¥ï,ûâ¨Ï#ÎFÓ}Oïþõ?–¾³|;’Ás&¸·ø&SÊzw°:y^ä|º `­ò©Æ]=‘ràû¤´K!Q¦Û £(ùŠõ®î7–Íýl?¸7´WЬÎ[>^ò§å_õ¿ïÙ1ÖÖ/²®!÷S"n¹á9ép(Ó¯“çED$¼êøóŒ0 ㇯¾?bwn¼ùÜù úa"g”¨…½»bë? 
Oï^/ƒÇ(µÖ¹¼îË£Ù†!bÏN^/#æ(uDDÆ4ïùsÒYÃÉ;±³bžÒøÁÖïþœrÎ0Dòýsó#/?¢Ôz×kkAþýìi— ÃËÇí½ïò—C¥k2SsŠ ÃâìÔ3½‡ß)§OÖß."öŽáýLgÓ†aêXv¬PÄý ¡9ŸXÂÏ_ªu4WÐîµG.†ÈÅCkxùaõÍ—M.Ã-H %¸g–Ñ‘(‘;E¥êÁºˆˆÈŪãÿ×·Ù¿Ñoö“óÖü÷c­x‚réœò²ÈJ¥×|wDÜä‘ýº‡øYœ“ˆ•£‘ŠGÎRdVgû"[jl¯§HŸ—vÕaotñlBà[–Zà<Ïû¡Üo@ *r®ÊÖ½rDºU.ÔÕ."†‘{¸¤gl€(’Á¥‡KZ ¸v4ëòÇwÿRuñ__ù¹s´ž. dÝtTäG;hÍ;ÙªôqŽü¶‹<©þOVE‹#ò‘œUÇÏݽé?K^ðhϘ©ó^ôýøoËš¿i7ÚV"SºÉûç«·O›~oÑ—«?:›WXfs¨ð_¼:³þqΈĊ$Þܘ%òã»9ÞÆ¦Ü(2È*滇Ð-Ñ¡ê@žaˆH‘F'ý“Suµ;½ì{o'9Ò¾èXYÛßz¸r€‹ì¯øs þ <‡û@KPJ9šÿº¼xçY¥‚çM|¨‡¯EIûƒ^Ÿç¯9kç;;ÜØ˜£ðàÖÿ½:­P,ýÆÏ{í>¿fn×Õtív‹ìOÒ|¦MÚ=È¢ÄÜ5vÌSÎöv"å奥×m¦v]î~dºÜMn¶rqóQûOÜ­IYÚ÷½NEãÁÒ±Óbï²Êäß±Wܸ§k¤n·¸I§>¦¢¤Ý{÷U¾öš¢;U¬ì9 ¼YÄܮˀH¹ôcå›êjw2§O¨žý¢B Rí-Q²‹G³®O²®–ˆŒ˜6°£R>a1Ó†«´­M®Á]¸†Z†;NY •ú?¿+›:éîáÏçg¶—^ËIKZ¹aï~‡qÓýJ‰ˆíø·¿]VúïÿÓ3þ©ÿe]ýßß^n~ qí̬ðŸÙ2rê¨Ç†‡ú: sOZï|×Ç_§ÍyhöËí,åÅygŽ|&2[Õ1AȹX¶mÙ×O÷äO‚ŒÂ¬¤/~PJDÊw/]7rüƒ éÐÎT–áØžÕu Òšº÷6ò÷—W T~<ß!¹D$åZ‡~ƒ{ú›íÅW³üè߯×Õ^)÷hqߘëߟl‰{ÄÕ£Y÷›kmqå­Ìé=ñé‘íͶ+»?ØZ¦<}…ĈŽî_¹;8+ëœaqõÖ€ZuŽˆ<~¦5XÖBbzøŸ9—Ý`·ž=×Ï>k!¡A®ìæÝ"\鿢ÃãvïÝçz{UªãàÁÉߟnÜ]¬ßSG󗯽òûwÞmþ8.ÍJÊfŒì–””\s×@KPîz¦r›æ¦Û-à ¥Úõ’´½ªæµ7mÀcG³­ý/"!€–ÑÆNzZH[;·»U§”½(ëÐå–œCBp"!€À—ë¨C]S‰êŸbÔà$ïõ?ÿïO—P ¸ŸjÞMŸžåc1\,Þ«wÓd¸º›^Íl6q4‹„Üχ¯Å¸^Þ"¿JÙÒº´÷+-½îJO‡Ãa6™lv¯ÜM??_wS)e±Xl6[K—Ô‚9šÅó€û]/)éêc4ãæ=ÂdH×>í}ò ®¹²›ÅE%þ¾>†§›²± Ãð÷õñµZ]ÝÍâ’à @oÜÍà ÀvþþÍÆâp¿k…ùBCcïlïéBG))++»tùr¹kß—_-ÈïÚ±}pKæ^ÎÝÌuy7ó¯]ëÚ½k—–.̽»›·ÉÑt ¸ŸÝîÈ½Ô ,ó0vóVr›ì¦+˜e@#!ÐH4„@#!ÐH4„@#!Ð,Õ—-›Íæ‘R´‹¥z¨tÓ5„¤¤ä°°PÃ0Z¾$žaFXXhRRr­kk™eØÂ%ð˜úOø«'„ÔÔÔ àà ®$·Ã0‚ƒƒBB‚RSSëêcîØ1¼ZÓ… 9]»F´+++s8-\$€Ö`µZ:v ³Z-GŽ­§›Ý¿®u±±ƒ[ 0žQ×½UÕ—Ünx„@#!ÐH´ÿ–ÔTSÞIEND®B`‚mistral-6.0.0/doc/source/img/Mistral_dashboard_environment_variables.png0000666000175100017510000003353513245513261026656 0ustar zuulzuul00000000000000‰PNG  IHDRþðnp‘¶sBITÛáOàtEXtSoftwaremate-screenshotÈ–ðJ IDATxœíÝw\Gûðg®rô^T)ŠÄ.ö^b×(v‰‰š^MÞ_Þ”WSL7&&±wÁ^’X°*vE)J•~p}ç÷ÇÇq\îù~ü$·³³³3³Ës»³{»$$¤= „²&œÆ®B¡†Æ34#,,´!ëBÈâ¯ëM'Õ|ÂÂÂØ”ÖoBÕBÔugé„þ°°PJa0ê#„PsÀáBtÿ¹îÐЮ÷B¨9¡”€wvvŽ&Qw¬_©T/B*•–——€­­ ÑœQ „j@æd†>Ÿ«R9à*—+©þòòò¬¬,‰¤\“ „ˆD¶ÞÞ>¶¶¶k B!£j !O3ìSå¨_¥ÒsÈO)ÍËË-,, ”r8 Ê·ŠT*MKKuqqõððÀ3„ªWu È<=Ÿ 
bHH'%##C.—r8Æ~P\\(•JZµòÅèBõÄ‚¹JègFgvaa¡R©àpÔƒD„gg'W(,,(**Ö|[(Š‚‚—Z´5[|ŸË– †3?¯ù;KÑØ•yN˜Ó'6_ýr^ ¤lüä÷;R3Ò‘u°`@6vÔ/—ËÅâR.W½{{û®]»:99j2—ܸqC,³“bq©­­­@ ¨kûP#u|íËùº©)}òÛIÍKSïR”RCW‘ž'ê¶h,¿õäËû9>ûûÛïŽg+Ù4ž÷È÷Þæ^rñç•Ñif~·™Õ'´â:y ¥£æÏ²ÙØQqq±æ´ÂÖÖ6<<œÃáH¥•B¡ {÷ð«W¯•••kÖíææZ§ö¡ÆUíÌ(eªŸšAöôøê×µN EÝB•e](éîÞ)ÄñDæ3×1¤“;€äáL™ÙÝcNŸª‰ñ¢fͲÙXè§”a¿a!r¹\o÷îÝcB(Uá.Ù´1ìÑä£õ®¹¥sä+ê´ü«EAšIÙ³‡—ï9|-GNA8ûóe=DÙ¾\}*OÅõüÞ'ã}Ê~^usÌ‹‚ØÒ êâl¢TàÕý…Çô tå+ ÅÝ{àJ¶œ]—άJªâÇ÷o:p½½[çÚyÄ”":y‰@–ÿðò±Ý‡¯åV_\»duÛ­ü®²†• ͸–"ïèÚÎáTnÀqnê O¾–!eZY²ªäñå[c®q#WŽ×…›0‹dƒwøÂáñxì·Ã0ÚªcÊãñ5 Rƒ§äè¹§ÞvÉëÞýI÷È·b³²çŽT^¦ÂaÇ0ä™7S½;ºwîÐFÑÑ@‘z3SNµ*‹kRµ’´“ ¯Kk#—(€+à@Åguö™…Ù1Ÿ}õwnÕ*ŠŒ•L)Žîɱ†ªèÎ…GL@P`ßö-¸½9À<<§XE)ð|†.œÚãJnC­®Lášµáp¹@¦Zi› EWÖ­1~ôà®G.l×jÓg¿_)ƒÿ&ËâÙØÍ|~å\™Lfè¯ÂçWÞ?¤÷'Á¨ÉÐŽ-ºÃÔUgU¤å©WÒiÇÀCÆ;:ÐôkËj£•ª•,Ï{ [ú |ïdŽ× !þ—œ'§T`x]`ÆgY^Rtnå=vîØ¼§ïd–È5QÏX+äå w×¶.·n(ôíï´øÞùGLPpИɮn`’.Þ+a(àºø»@þß?|{ðiÀë«_6Ôjí5š\ (ùt3Ò²’rd”¡Ä9ÀGtëQ9c¤™T–síÐúÛ‰#?Z1Î+¤GK~B!þ )³l@6ú¹\!œŠÒ8ÅÅÅÕWÆþ¦@s&Â0ŒR©4¯!蹤ÞÄÁK¾[£•úð÷·¼Q%Ži½ÏA=Yú0á ú:´²€Œ+J™ª¡ªæå³ø#‰£_ó±ü#Øy’Ä#ñyJJù†×f|Vå]޾8ly_ç6ïnn+dٷҙРû/ÞC“_÷Ô§ä^ìCUp;·`WÕƒó÷ÙÈŠüGyÐÞËmø[_ ¯Ì¬¯ÕÚk4±Tð’Õš­Prñ@|¾Ü#»Ú9¹gñ£·¼!1ÜLèòö÷¯kÊ?« Å Ô4X6 ëÜáSåŸB¡â«ñll„®®.쉆Ã0®®.B¡WA¡Pê‚ÿšÞ?};Šö,œ“LÁ¸§êìY wŠUÕóèîfLqâæo7K.T€²0%vówQ×Jô,h¤ýŸ©øÎÎU?쿘œ'Ñj‰V¨ ã6mŽM)QÈ ’’ ”Õû‡–Ü?ŸÄH©ÄÞ/eØtEÖ?ëwŧ‹µÇT ´Z+ÅÔRìjJÒþòͶ;eŒ,ýpÔÑ›YeŒ\Ř×LæÞ:ôבTYcïWø¯nÿêµUyrgVV®Î¼ÈÆÆFk—¢R©L"‘€H$²±j_;–J¥eexç0BÕ‹:dOýOî¬~ú — …6ÚcL|>ßÁÁ^;».…B)—סQ!„Œ±`@6ú ?¿ÀÉÉÁÑÑÁH)%%¥ÅÅ¥8”ˆBõÊRÙà}ý”Ò¢¢©Tæêê,V9¡ ”Êd²‚‚"©Tf~ÕBÕŽ¥²Nè7ø-!•J33³ !€l’J¥r¹ôB¨Õ= ›ðÑÉÀ^J®]uBYJ]rÍB?B¡fÀôCFB53:GýU „B G÷Â3ç†!„šª¬ò0ðA!«ƒ¡!„¬†~„²:úBÈê`èG!«ƒ¡!„¬†~„²:¦ŸÜY]yi±ÅëB¨vlœjºHmB?´m_»BÈš%Ý»Ãçrj¬ ©Ý±x-C?cà-ï5ºû÷îDôøìٳƮBϯچ~†1 ¡FBwQ Zºdñ¯¿ÿÙØµ@–„¡ßò–¿ö •_Zóç ¨xÎò×^ùeíºÆ­•Ua0ô@æ-ïuhÍŽt÷™/OŒ[·>…è] ðO¾ÙÁÐ_/ŽJ{ÏôIÜ–Yù‡„=ÖŠ¡_?úpwÖÈ™TÞ¬òb¿ŽÓœ³¢2 ÁÐo-0ô׋¤-×F¾6Ôæç¿Ë+þ–Øã9F ëâí`C”%9IgöžMo½¾tgbæØŽÞ¶Laò©‰^“Gwöv` ïÙqâ)Jyí¿Ð¯§WUôäæáqÏ ÿ•"†¡¸‹Pºû ïQüëÇäš$ÁèþÜ+kK)ðôîŸP±¿õúÒ~þU³ f÷Ï&§Ö¡/óC6¥.Ÿ×õį×Õ)l½:)èÂ?13K$`çßgÆäÈkßoU_ŸW6­KU¶~ñÕñóüOoù#i3íåɽŽýt HÄâ1î—woŽ~¢°ï0hÖì‰7¾.o¤Æ5”Qá.j=SöÖtOeT6‡e†þzÁ0 @òŸ—F½;RøÍqYE Ж}'îàëf/âs€j IgQʈ 
»â3!|6C@Û·ÞêO€TYÒÇúï9ïNÆE—ƒÍ´þpnµ˜å7²ý¸Z×FõuÌ_‡ûúq»Ãö*6êá C¯Ñ¤LŸÑK|`ó†´B±LÁ0ž+>œ§éIí.­þù1ÀÃoVßÂñSó0*Š»¨QqۊߙߒYCæïÚÍP fìŸrJm•*1û=A[îŸõFYϿ£þzQÑ?E»ö”2×S“b P$—J$r°o6äE0p0UýóþË’gö,9–˜Z¨²ñìÞwعýk¬9Mø˜”¾áÃ[½#À!é‡ †Q?ËËäþKaVwÏWrö-ÆOÜ?ë úk Í€—yë…¦hÒŸg'¼?¨"å¯C·Œ^ðŽ=OY–Ÿzm3ÀB½WϪVý³f×°ÉÃf÷q·ãÈ ŸÞŒÝ„›À¼Ìkštÿq凣àä—R  î+“ûçÅßÎûϘöÆP¦ìYÊ•¿ ÝK¸ZJÎd%‡«þæÛm @ѾB*cãµßbß$$Dý4ž°°Ðôô§æ¼–½¼´ØÖÞÑR5@Ȳ òÛ·oŸ““ÛØAHââ"½Ïð)š'äÛÙÏ„Ëáòù R(hŘ›B*sÞ$«^`yi±9O¢JUëÖ-Õ7â€jnœ]²²²»ÕŒó&@ed/[æ vk‹êiuµ ý¥¥%–­B! ¹L võV~-C¿»§—eëBV¢8?Ïd•BÏØŽÕßd„BµÄ0ì=ý¦/¾Ö†~„zîTÜ"…¡!„¬†÷VA½–¯eG!«ƒ¡!„¬†~„ž;+Þ·Ñ—ª]i–Z;ªo–ëg·e iÙ³ìÇwÎþ›W9kå7«ÙÏ”1¼ 7W}}8D“ÊÏ®ú>^;¥rî/ ëäå,â¨J ž<¸òïÉÛêˆóÛ}±_{®ª0çÎ…}'î) Urƒ‡Oî×¾¥³=)ÍM½æôýзۭüfµùû"›™­jíZa¨bz{¸<æÛh½YÉùÅ_öÕ¬ÂPo˜¹itRôΪÞ|3;Ê:UïCTtúÙø$Ò°Øeޕ߬¦ÀØ»ûµï1zþk.¿þz²¼ÚcüxB!ñ „MåÜ«õsàÉ€Å~ñfèf§‚¾Ëçw|ô÷Ѩ}YE*[‹ .=#áöo@íF¼1ÎóòžÍ»R%"¿n“§¿:&ý§£eú¾t|ðµ;§äÉî­Ûuë5î¯ÕT»zCØzw#{R-Za¤bÕ‰'Œá}T¡~½ôž_ùÓmã½aÖ¦1þ ¡&wZC,y‡F!ÎMN8ò‹dÙG ½O®ÉÑÉ œ±«O’°3…°­òýpw×Ç{{¬í×GÊ9UB’ûü>‚ëÝc¥ù·OgÜfgy/è¬Ü»ú|*EZÜÆ=á+æ{ýUÿ% áÏØ‡% }–zãdê ‹µYK-ZQ£ŠEÝïðJ›£¿¦€[äÌÍ࿌e²7Lnšæ2½'Nêèn ’¼G »cJ9V¼ÿîÆ¸Ô ¡­\¸ògâwF«Ó)Ãë8rêÀÞÎ\eAƵè]r+òÿ›4¾[;Ž´0ãêž½çŠ vÃf hëã$‚òÜä+G¢ã Ø“$ö¿lôá;µ4z@GGGQ’}ïä¶¿».a×ÑS"Ú{:‘²´+¶Ÿ«òè!êÑ{ÞÜÞÌ…-[/ëyeeì{OšÚ'ÐM ,ʸ¡yËŠÁuU¯'›ß£÷ Õ>§êëýih¥+Þ÷¯‹)ãC}]E¼¯¾ýÎPµé­§Þ¢ÓÏÕ»]ûDVï¶£ŒcŸÉSú¸ò•E7·é5‡Ío¨¯šz¹¹“Þ< £‡l«’ÈØÏk¡:¼Ž«úG:0K+B$QÛúÎ’G¿¿Z¥œ±NpôŽþë£Epä‘Ö¬”“`;`»ÞúÄÊ!rpÇ#7eäKS_¯E+jT1Å3vïå|ó¯ HßùÞô{_=ËüÞлiš=á„ÅCÎFýv-<»O½tÂͯ©<æ·J‰Z·/<»O™½tìõ¯Ž)€3äÕ ž—¶®ß›¡tèŠv='L^Zt÷§DÐ*ÖRjÚ #Ó+#ê©Û«}#À'sº'Y5éÊM£¢”ÏPEÅÙeø”{\Ó6õØW9u  §f"¡2= û) íûï "š—NU~%§jNU9bBÔ?ºÙ¾õÜø“û»Û+²îžÝqø®ž¯pÚªßÔÁ[»W¾ûM÷8¨¯Îƒû¶†„õçr úDW©ãë2TO½c¨3 ­îšQgmzëid£˜Io3[ì¯Ìr` ‘:4'õúIçáP¾K;…23D ˜÷Þ M Ò–ùéÖ(s>êá/u;²F“r¬f…@Ò=«8.á§ÇÓ#@²×tͤy÷Ïì‚^ ŒEغ¨Q+j\±² Üw@âW¥UNØÌï íMó /ÀÙÊ™=ÒÖ²©Êè pA=PyÌÑà¢úc(ÀöÓc€ß|wËì±A’hWp£j°{;þ­SDûÓvÙÄÁc^á(EI~úåÃëÔ³ÄÇ~<:ìÅ‘óû:qTÅYwþzÌÀð×-Õ°þS‡x9ðT’¬äãkã45ѳ¶Ì˜†Ù­0R±1Þz7 Ȭ;:ê…á³z»ˆ¸*IQæƒãkŽËµ— )AgœJ;±IŬ?=iÒô¥ƒD yöèÜo12M3£²‚'¾2Ø…«ÈO‰]{PÁ¦«Îü¾gð„¡‘½Ýí8ò¢Ì›çõß> ±-Íqì”W&;räÅÙרw_Ï]°ôá<ÂvTÔÑ»óGÏ{ÛŽ§,+H½¾ @}P¬8¾îØØÉã÷w$¥© 1Úc#„yýË Kç¿ÌÙüG¬ž<ùÁõg¦L‰\:Œ§(ʸ±ZÍQ7ÊÀºôÖÓ½c¨3 
­ÔÌ:›ìOCE§Ÿu&M’ØpvÊäÈeÃxÊ¢'7öÐÖk×WMN-_Ðèäf𚪞KÈÌWÇÝ^µº¾FWª ü½ÒF™~+>pZõíÑú^Qq~ž9¯T4Sí^ÐXrXñÞë‹*⣮Õß*B¨Ž^ÞÎÛŽ‘OÏÙ½áÁÉÆ®N©Ç‡6¯úö{¨ÝF­5ƒç 4ƒ& ‹Ã½¢þì+ï4qÞ(7¡"?ýÒúhEýýôç¹R>!„ªkæ>!„žOø–.„j|!Á•¿WÓ‡Rzÿá##jC?B5>Çá†a,ú ƒZ†~…\n:B!óܾ—ԫñ~„²:úBÈê`èG!«ƒ¡!„¬†~„²:úBÈêXæ¾þO?Q¿‚…Q”g?¼´çpB9éòÖG½¶®ü3¯â ó?@ì&T_ü³/W~úɊϾ\©S¦&…ßiä¬[¸p•…Y·cw½­Ð䡲SŸs *~ ¡YÊHÆ—R×V%ç¥ÄïÛw¾˜,M§-f&VOgå%¹ã÷Ç\*&&óSF¡ŠŸe¥Ü:}ârvõNE•4=©³²ZûIûGED®A=¦Í|½_ÂWoü~{âë}àkõù#úÜüévŒô;Y=˜BƼ;ÁëâÎõÛR$vmzM›ùÆø´o–ª£ö>éàWý/ý–V³ÚYJ]+ž½o·é ^z~å)“¥émˆù‰št"töï>}ÎÒþ—Vž7™ŸŽÐÁÓ¿C¯q‹ÞpùéÇcb«xæTíúêEÈjYxÀ‡J œÝÜ>@$‡¶”žïD€:ÍZº#FZ›ðÔbqW宿Î>*QPEQÊù?v*C_ª|Öíß.»ÌžhGköæLÓK)Åéq»€t«E…kÊŠRÎïÒۜ̄2ò’ì—|˜ÛãeŸú®B¨9±pè'B×à‘ ºÄN¦¬‹kµ8 º¿ÜâÒºÇF5h¬j½q’‚í¨Ê5*OÿžÒaiïš}©˜^ŠkߪÇtŸ5˜¡¡s@ÄLP^ªÑRôúQ2!„*XlÀ‡=›fåÅ9)'~½À>¦Ÿ¨N­/øh¦gÛàü¨ÏU&¢³¡óq€”* ª\0Èß¾OúÉBßË2Œ>üH‡¡¥*‡ûiÑ•ïâÍy߀vÍ5C4&õ¦SúøÀwç5+5”¿ª€q&+‰Bë¯P.³6žjûñпÿ·Ëd ­~•ýð @ëñmª¼§”¿œÿŸñ¢ÏI5‰*Jù”**Â:¥|J•&—ÒTƒrlZt±xYøÑÕ×L–f‘±~J‰ÀÅ/bÆÌ®-áÆ#ù« Ôé„2®Þoî$4.ÒÚ_„<,‚µ¦ƒFƒä¸NæìI]–ôлOˆ¨’¥@ªÉ¥*«ÍH3¯íá`óK«#B¨¢(õÔºc>3y×äÒé:Êu;!„Œh÷õg®¿-xqÁÀGá9 P3žåIDATô[\1%;¸æÐ¸I£æõu±å*Ë Ÿ>8òÃa9T¿Pu)кk¾¼ ýÜú“ì8•¹¥™bÎØ}ö†LÁ›}`åe#ù?ýd¥J…Dü,ëÑÅ ?Æ•ãÆhº‘ý€w÷#TËwóZðÅ’!dUŒ‡Ðn];ë¤dåäfeçÔ®4 wó6£~„²Y9¹ ²:;©º—'‰þµ€¡!„YhçŽ:/æÍÊÉMIM€n];geçøx{E£?†~„jd\.Wûż”Ò–>Þ“ ç–þúB¨‘B!ñWÙÉÝB3³²[x{±“šÑ/O ý!ÔLPJ†ÑL2 “™>^žO³²sró@ßåߺÀÐBjý“RÚ­kgvô¿…·ú- C?B=_®Ý¸Å~íBjþû!s`èG¡FF <»ÅPzÝaèG¡F†¡!„¬†~„²:úBUáãíÅ>ËÁ‚0ô#„P#ã~„²RÚ·Óþa—æÑYÙ9–}v`èG¡FG)eŸåÀåê>6¿žü1ô#„P#»u÷~¯± ¼ !„eaèG!«ƒ¡!„¬†~„²:úBÈê`èG!«ƒ¡!„¬†~„²:ø“.„zM?Nó9úàaËŽ¡!„ž#oôw€ŸÎ=Ó ÷št‹¬C?B=Gœ]ù\.ù`„§\Vù(7cgÏS©,ö< ý!ôùì@ûáË)>š‡¹q¹„/àH%*K­C?B52ía}Ö½¤Ÿì{ðÝ.—T¤1Út®#¼Ã!„ž;~¾­BÚµ}gK–B®bTŒæŸ¥ÊÇ£~„z.h®ëN?îÜÅËýûö€÷¶?øz†{ìGý!Ôœ6ÄÖÖ¶}»¶ “*•ŠQ©ÇúB¨™bÿ5r…€ÏF…¡!„¬U©¨JýÁRebèG¡çèØA~ðA!kAó)†~„²”a(CØ–*C?B=×¾:M”–-C?B=ªÿ¦·þà}ý!duð¨!„™ÅÇoõ#„ÕÁÐBVC?BY ý!du0ô#„ÕÁÐBVC?BY ý!du0ô#„ÕÁÐBVC?BY ý!du0ô#„ÕÁÐBVC?BYZ>¯¿¼´Ø²õ@!ëÑè!´6¡ßÖÁÉâõ@!Ô`pÀ!„¬†~„²:úBÈê4Pèçóùƒ ìÒ¹Sì!„ý"'{[¡ Rú7lÈ Î;øx{×}u!ÔÔµš{¤q+PË›;µñy‘HÀãé/­E‹~~~r¹üïþ u_#B5 ”á80ÒÞ«‡£TÝ(»ýGiZNcW À"GýâÒRoooJ©B;;»Ö­[ÀöíÛ³23ë¾:O?YQ¯ùQ³dh7ÀÝÕ»©ÚlÍÙ?íÉŽ¹×®sÚ~ߨ5R³@视ñùüBÈ©S§îß¿ogg§·ü««×xn»·>*öÜ6Y''ä&^`d 
”2Y‡Šÿž¥™Åëü‰ÇÔý-gÇøLøÂÞ²‰”zÚ Xãy°EäS8”€óœÃ”¥Ã[Î9$RžßrÎ[µ®˜e.ó²ÞÏÏ/ @3 :tÉÉÉ'Nœ 54"„êÕg_®lì*4«j,zþ•(À3|$ßÁž ‰Ú¼\¯ùdÛ¼¼‡ÞN¦°‰dÀzgÛŸíŸøtÿÊr»>ì (} åR—Ðb2dÿTëŠY k½«««££#Ã0)))èääT\\¼mÛ6//¯ôŒ ãE}úÉŠßÏÜŸàiÏ‘¤_Ù¹ãt!Jmº›> ƒ·§,5~ßæÓÙ:KiÿÁk&)uè7mf¿ 7²0#ñ€V…ùÇÎÜÁÇ…§,H»²wûÙl¢{âÓOV¬¿ôxR˜¯+Ož—|iëžË¥„ߥýÐqƒ:·pq%Ywm:šD ›ÿóÉ“ÂüÜly_üo•¡Uèm { Êþ·zä2¿¨SE£?]Áÿâ˃”°F&}ò¡båÊîÌ[­’ý¦Nïäa å¹Éq;÷^.6\I6ý¯„'Sº¶°g î]w¹Åœ©ÝZ90·ö¬‹I#5ªª‘VS2Jo+N:w0§ç5[ßЖ€–CgVß²&;œt=op»N"Zžó0þàÞ‹ÏŒïÌ€xï*Òw’ë¨%<~™2ûBQìo2¥zË;sHA™â·þÀ~põçoÞ¯ „À=ñÑíÎs?‚sÿ¡§3`¢3ì- äÉ÷¥&‡ÀÎ$NG­ªõÕSˆ³¡?66v̘1~~~………|>ß××—a˜M›6 …Âàààê_zz-öNÝ´ñ`†XТçÜ—^íuze<ðÇ.ÁëüæuÛS¥.†NøÍœ¢—³ûwÃÏW2Á»×‹óÛT¤s†¿>Ùãü¦?v¤)»ŽXôòìkŸoW_|Q«ä kwf‚wÏióßœp틃 xovû3Gw­M+,‡à f¾÷ùïlþ—<’Öÿ¾'KB¯¢z?ûr¥Î·—¶šuüúÑ|Ûƒ%`;§ =ý9%ݵJÓ®¤pòÒaÿnøEÝ?¯ONü"Znd+@$¹´îÇd…ÿüw¦-ˆ?þÇ)ª yoÍó¿³–j5¡ú[±Â¼ž×0²¥ônY“þÒ¤Ðì]¿mO)¢~aƒ^øUïöBHQ^Ç^P› ›öºOôtg ;K^qÌAH>!6ìg€‚Ê¥c¦(>S‰Þ§ôww^Q¶ìK/þ÷„Îq³)ÊWÔþ¾Kø\¿~ýŸþ€N:…„„@LLLnnn—.]!f†þm{®f”()zi+lâÔ0ÞÙ SK” Ï»}̬¸Ó:‘ÓQñOË)-Ϻô×9MúÔ^‚£.§•(ARp=føOÖ»¸fÙËQ±¤ó$6qå/û/>*(SRP–<øg ¸×äÞ}]}Œ¬Bo¨QQb”¾‹<ع‹Z*b.锦]ÉiÈ™•ýC:˜®ä–cŠäTœ´‡Ç-'É™’;{zZ¶Õz[afÏkɯwËšìp)€ÐÙÃÝA (N‹?€qÕ ‘&K¯­þ,ãÙ¤ö•Sî!Ùùª®ö¶Ÿs˜¯(ÉÍgœ=m U+™jÃæ³ÌQ?Ã0têÔ©6mÚÀµk×._¾Ü³gO>ŸÏf0ÇãÊoB1!ösö\¨†üöVNŨ#N@ÈÇ !šnÓÿWù1 :kë3†uöópñ9@ie»nk-kdzhDÍŠº³½|ÒÜ`úÝZè$ÙzG÷ @»’­öTNÅô6YÉLv胈 «â3!| ·Z_+Ììy #ùõnY C­Ø¼é줧 ò°—gÞ>½ùÀ-³e•óöJÉ’ÜT†ÛRò Úc<AÓrôdééJhk7$’¦½È¦ËÏSÏ‘p €È€7j=¿¾.÷Ê[rÀÇÕÕÕÏÏoûöío¾ù¦L&Û·o_»ví5Çû¤¶} ÐହrJí)«‡†ý5éé½*Žö{hÒSîÿoÕ S_˜ZËvHg?ÍšÓO¼ÿ¯uóKd †ñþïU. 
U ™«Ð ”¥ /Š"äé†Პð9™fSð{Fõ¥´R2zΧz<1³¶†XªÕz[afÏkɯwËšlyr1fÛEJ‰sàˆ7fÎ;p+ÊÌf"k–ûPáÔí3gW”(Ÿ%ìße<^Ó³¯ üû”(U<ÙœuV¬Þ½Ó¶‘AsJ“@Ò Ò{TQZ~%eá›;ýýýù|þ_|ñÍ7߸ºº¶jÕªÊÍžµ]ÅÞëÊ‹úø9r@àÞiô¹g)Ìïí#âϱUÏé34é{îÐÁsÃ}D„ˆ¼{ͯdØsI2aNßW!<;ŸS—ë]©fÙžsÒ;Ñl¢€R!“È•\ßS檰™«ÐÈèe«3Ö´¨‚¨'dÄèqÃIúÆBã9wß§ƒæ÷l)"DäÓgaz¿gVuªª‘VWo…™=¯a$¿Þ-k²oOñ´áp¸ŒÊÜSX„˜´¿ ÌËÜúÂÓí³rNþ(‘ªwø'›ÇjgÓL’UûZööñ™ÛgåÅ§Ë;Ÿl[¬>ÕþåÉæ±euí‹õs8†aˆºN¤S§Nñññ\.·]»vÚÑža§–uUþõðøé“– r$¥©q{uNð/üv®ÍŒo³aÊž¥\Ùí³é²èµ§¦Ï˜óæH¾¢0#1 |°éªÞ9lÊȹýÜí8²Â§7b7é]é†ÌvS–så)òþ%ZÎ~÷®?tû¥±/}`ÏS–å?¾¶`‘ÞeÍ\…ÆöÄœÅo|4ŠGª_ö¬iQP¾õ.|÷¾˜8(íýíßiÓf¼>Ô$yÎü²W¦÷ðÙ|luõV˜ÙóFòëݲ&[•â0aú²iNYqæµýQætBÏ-Òžýšžþ”ð¸5-ÂÑNÄçó]\\ˆáÀA)ÍÌÌT¨…RUûÊ6 #·Ü „9àA=:gFP¥ªuë–‰‰×ÙIK<ÃG ””—åææÑ¡B¡ÈV¡”èÌÒüüC-B¨91?.7< „þâÒR{{Gg#ƒ@*•–”ê¹}#>B50 „~¥RUXÔÜÞÒŽ_H¡f ßÒ…BVC?BY ý!du0ô#„ÕÁÐBVC?BY ý!du0ô#„ÕÁÐBVC?BY ý!du0ô#„ÕÁÐBVC?BY ý!du0ô#„ÕÁÐBVC?BY ý!du0ô#„ÕÁÐBVC?BY ý!dux]„Ðó…Ç·±srÙ¹œ¦thÈPªRÈÊÄEeb±9ù…B¡‹‹³ÈFÈiRͤ”ÊåŠââ’Róši†~„P®îžÅr~^0”6v]j€C@ĺÙ;« ©Lf2¿—§»ŠB™DÖ” @¸\®““£Â¼f‚¡!TÇ/*nZñ@EA,¹‚×ÊÙ)+'×d~>/)—4@Å,‹(U*%€‹yÍ4C?B¨š&u¼¯M®"<×ÌÌ´É6x|s›©qKÕ!Ô\PJ™Æ®C½£@›tè¯# ý¡*h?FæhJ—¶B PÚ„ÿ™ÝLú¼ZöÊb“y긑ñ¨!¤ƒZjÀ‡ÚøÙ¡Skg‡‘ˆ³ÓŸ?q=QJعß~0Þûz“:³Sû%‹zòT)çý~©Ä"0Q=ó(7°Ï =Úú8 hY^ÊõÓ'o×wÅ þO¼0ô#„tY$î0œÀ7—Gø’ìc;¢Ï¥Ëì[‡DÎì>³­õÑ8†è®Ë7üÝÈN^Pš³ûÁst¥Ú |y¸ËÕ#ÑGŸ–‚C‹ác¶7v¥,C?B¨ª ›æ67¢5‡”n9vê €Â´Û¿líøý¯©slã¢ÊµW'è0`Åø;Eæ±'Nƒ%Z”™—y=¦µSü-î) èIâ¿ÛØt®C@¿!½B¼lˆ²$÷ÁÙèØÇÀ€7^[²åò£Q}Ýl9²â̇Ǖr€måàѽ‚<IyÆÍÑqù@)·íÀ±}ƒ=¸ª¢§·ŽŽFÔƒðxÔjh‰;3¼v§Sª æéçF‚w¥'4ÙÜz|o·à^ÔŸ—n3 î«¶ä—Ç8ùDO‘K&]äÃ#$éäÅ[*j‘õÖ ¥fpÈÐwAxÍ–×2ŠË UŠS.Ä€ËPÍÕ×èc·³ÊTT%ͺ¤›>º#÷ÒžÄ'b%•>ˆÝÎ&Ž åŸÛãi™’J‹ïüsZЂ—yB Ì2>A@ïVæ ¹Úå¯:õÁ`Ÿv3fÏÙ¶ùNŽòS³Nnò|)}\}iŸžã#Úµr³³á±ã9Œ¦´ Pÿ&Ž@!|6Ýàxµõù/}µ¿v±YpÀ!Ô°Ì‹‰&mËÿó†i­éÓÕ)Lë9gµËÏ»pøSÉðÿŒñï:qîËÂmë®Êë¾j :% -áLÝôI“ÃËŽíÙú¤P,W2ÔãíåÓ—“W51 yÍÚ{¤.ohÃ„Ë ¼äÿû„Rǹã†ù yœý:½9WD™œýkÌPʈ¯žøxO’xíFÏ}½¯Mà øPó|²v& &MêÑÊGïØ"lè6Ý@¡JeJŽ­w—“¡¢4¨Z¬fòÈmUŸi¡-m9”çܶ¤:ñªtä¤0_¡‘{›ž£æá€B¨ñX⨟л߮–O|¡Kïi‘£l¸*iiNRâöƒ—t®åR Ê{§ÿ/Jºb~Wÿ!sÞçïùæt~Ý+`ŠyTüÏÚãLíí"dÄy¯Å°Kmû;)rØŒ¥¶ŸwëÖmY$$¤½ÞeÂÂBë¿b!„ê‘öø¾6ƒ¡!„Ps…÷õ#„ÕÁÐBVC?BYÿ蛋6 
yœÐIEND®B`‚mistral-6.0.0/doc/source/img/Pycharm_run_config_menu.png0000666000175100017510000007444613245513261023426 0ustar zuulzuul00000000000000‰PNG  IHDRа鎮 pHYs  šœ8&iTXtXML:com.adobe.xmp Adobe Photoshop CC 2015 (Windows) 2015-09-09T17:13:31+03:00 2015-09-09T17:15:48+03:00 2015-09-09T17:15:48+03:00 image/png 3 xmp.iid:3124f3ac-943f-b045-9026-2b79de52f1fc xmp.did:3124f3ac-943f-b045-9026-2b79de52f1fc xmp.did:3124f3ac-943f-b045-9026-2b79de52f1fc created xmp.iid:3124f3ac-943f-b045-9026-2b79de52f1fc 2015-09-09T17:13:31+03:00 Adobe Photoshop CC 2015 (Windows) 1 720000/10000 720000/10000 2 65535 394 176 •žF cHRMz%€ƒùÿ€éu0ê`:˜o’_ÅF@zIDATxÚbÔO=Å€xo•3 ÀguZþ‚³kÙjÙØ˜þÿWÄÔ&e‘a‘Xi‘XY­ôYœ‘K’*Îbœ¡Å¢ô¦ýKšÀ÷Ž ƒ 0100¼üü©Pÿ©¢.Þ”îØ´~-ÃÆÙ‘ÐH |ýîà , ?2…b^Ø­Y¿¡®º„áÃFF†Æ–¶úšªÿÿï12*Ýùÿ_…‘±±¥¦öãÀ݉‡oh1100üåa`ee%I›¥6MŬ©©ùÿçÿ¿ÿüÿÿŸxmOnž¦yhøúHIJ ºH”””|’ -îí¬¾¦ ÂøÿŸ‡ÎÎÄIž±Lö«Ú?úØ÷ÿ¿¤äÄZœªÃâj0‡V±¨ž.=mÅ`ñõ“Îú~õÃ…K—!|w>ÿÿëkªêª‹iä¦6Nœ%‘‰²"V¯ÛðY­spE"ƒýTG Rñ?š8kõº ³÷“mÄÙéÆ$ÕÄ–[ §Ï]```дvùå绋ˆ7â÷ïß,,,ŒŒŒä¹ 6*âÃ÷?›×¯‰ŠX¼lÅ . 
ÿÿŒmy†î ŸŒ,?þÿã(]ÃÎÀÀP—`ÊÀÀдà4Zªf€ÅQ}MUÛåŒL åÚ›ïCåùX~ú3|É÷Vù&Æ ƒ«¬úÂáp":° ÒœÐH© fƒ·j¨xÙ°.tøÏÀÀÀhcï8Øò,¡3000Xÿÿoûÿ?„IËvÿÿëüÿÿ_Èýÿª7ýf,X†V1o\·l‡Iz*«h[fÚØ;²³³ÿüùsp…SsCÝ ‹»»v#Æ9ª#¾¾¬eA‰4œ/ËÇBSûÖ®›aèIóh‹¢ç­ë& âpb.g``¨«*¥“°ÖéBX;ÑÃùÿZ„¿¾÷\ˆ‰‘QÍ9ô+‹ƒšø£·ß¯šI³4̸xÕêØ°P6fda¿À`HsˆÑÆÞ14(`Puc Õ Uüêš‚Mäªûߟ4, È §¿¿¿w„|®Þ(Fe7™@º.±Qk—.ÿÏÀðè®Èß¿ØçàìJ×ÄD.Y¾~0–ÿÿBo£·Ç”Ä =```òþP¡ò·'ì#„Duñ¦µÁl’‘¥²¯ªd```cbd```dB–ª¯©‚4è!$[@…Y|%[p…Ebå­ÊÿÆ•˜nR‚Ö:Œœƒ(M Î.úàMSÿJ¤þ•H½ ¢³ #ÍÅÈÀÀ ÉÉÈÀÀÀbœX—`ZÈVbTWe6$ü¬'ÍÃÀÀ€Ù^b@ïTÿÿÒ‚MSd¥)ÆbQæRqø€5‘-=äÌÑZŒ7P µ-ÑRtÜi˜R߯~x}ï9| ÒöSsu×á“vŽøÿ?:³ ¾¦Ê6,mTÝ:$Ef\}MÕÿÿü!–0¢é„*å­Å‹üƒM”ÅÚJˆÜÇÊÊòû÷ŸÑ\F ÷}Kåcbb‚¸üÿOl»S—é?ÛÿÿuÕ%ÈÙ­¤:?¡:-ªýÿÏÀÀQ¬úÿ?C]5´‰PV^W\[]98Cj´D'1M­[½ÒÉÑaé¢ÈrY…¥VÁÉüÿÿÃK]EZi][žIfqyCz~Éð )hšbgg—’”¸ÿø!ÃßÑÔƒ7MýJæ¹ÿ`ØÖùs2CêÓÏÏ©œÈêÿ‘'f†(ØØí„úó!uûÙˆ Ó³#!3C£%:YiêG¦ CH`bð ºÄúÿÿºê8ÛÿÿëªÃ ¥¸5Ré>„Àù™fK× åA´93Ì6úª|;È,Úhš" @Ç Öø‡­=ÄÿîxÉ<ÒsöJÍ?àQJ£`úûûûÂl6Z,&¢IšºyófÅMIÑ× oè<"ü÷ï_vö™Ö8HHI‹ˆŠ ö4Å hÒ:ÿÿÇDÚiÿÿiK#033§fdië ž`ŠŠ````U´í÷‘ÿÿì\mL[U~/ýâCè—ÉÜœl° &le*ÙÌ0.0Ê i˜™‹bºX` F[úqA ›1ÖýØÆ:¾cŒ™?(dF‰šÍdšMÌLœÃUàŽv|ÞãÓÞÞÒÂl±¬wÜ÷ÇMï9½'÷]o0™Lw„÷²’3~û­NW;d¹Q¯¯¶ èUå99û,Ã#uºÚ¿ŸJ ¹ssš±ñ{~ã6òçcrŠ«ŽäïÍ~eÈrCq8ã`áÑ]©ñ¯ææY†Gž(ošžž–J¥w%3æD5ć—5·ÐäÍûââ_ø=¦ ||¨ ~8ßð†NiìÀ™ö«A.ûõ“›^íÀÕ¿‰lxÀ}_…¯šÂç“×W·ýLò e öÒ(š›WHò²Ëö{$-V•ä%íá&’.]’äÅ‚g6显&ˆ¾\7¤¸$o้³GÃäG’—]VÖ׳tþ¸fI^Y«ô°‡{¬U’—]öY¯ÛˆÆ%yb.ý3t÷ €¿ÇQ¬‹‘®»ocÞÎ0*Ìá¶Ñ³Y6¹\£^[xB‚ؽ7(¢éÛ÷e“Bt‡+ÎÍØøÌBAœ-š:oÏMBTì ýóÏT̬<Áû*ޟכØ4õÝ ˆž4NøñÑ‘ÙOuû/Gùþ\FžŠM/¼ÉoÂùa«ÁH’ýhb,¨C>ìÜ‘‚ ýéÖ/°æZB’¼ ‹CǯD,\ á¯7 ‘H$9Nˆ,–ƒSo_R”%E{ƒÝ’¬gââl6ÛÚcT¸ ážâÎ>&Êi¸¨Ó܉P8>*·£k:ºú÷®ÛR¶§ìHK~vËcp àR‡Ùkez4âÂK.Cûÿÿì]{PT×þÎÝ»<T… $Á¢U”¨+"‰Ú$Š»ÆMÅitI@³ËÂjxÈ$ØL£2ÍÃ’ÆT†àbcjL ¢&“)MÛI±6‰™TÒ”‡òÚÇéwY/ûå|³ÃÜeÏ=÷rÏ{¾ûÛï|¿aüÄÜ€)¥†ÿ·ãÄ–Rå [ÞdøÕ<‰ÒE~”juOK9ÂÿØžpD²Ä4µÓõ©bÛÑÅ©¡Ò¸Ýõ”O"‰åë?{8}ÉC­Ôj¼“c· €Þ£PçhŸ˜ëDéT^\ªÓ©(•Åã³NBª‹Þ°Ðóue4´ÀÃ/vÆY§‡{©bæÝtÉKE{—Ñi3+ö:¹V§@©Ôn;[—š£Sgõr­N›?TUd÷§1°ùŽÑL†Iqø%¶ÕöýK‰ÇyÊ. 
ÃØÜŸ¼BÅšPã6ÿÞHꔇ:jÇcüXóC˜Ÿ84&Î,Ô8׫Ž\ù­Õ½•*¾+ú÷+è0ñÔÕÛÝÝwÕäÁaM Å¡X ¥$ln †3©uÔ` ‰°°=%Qs9ê”Þ² C|®ü¦Ó4+ú1§áb¨ky 0þå¦Ã][*LüÕœðq¡»Ì™bï:£´°1`%¾ xݰ`b#þ1;|qì"vQÆ žhNзɖæU?X‚Di)É®X¶þ… Ñk6ê5/Q deª^“åãP-y™ê—ºœL;&d{Ëo(6lÎq‡¥‚’mÓü‹RŸì5[7,æÛoUçìÜ–ÂØÕÍ R N=^[ýZù‘#eùvÌÉMeó(Pgˆs7ßuötqÜU¦9ýýýýÊõë µÇœœ¥D'|@ø’áVœ&ú¼_WySj[aF«Þ(&ÄýØ‹ ‡ Ñç¥W¶5T6`w^îžÂýMï~¿lÌÓþº¸„Ò8¾Ë7^ÛçŠò3Œ)IJ”N#¤€ú™4ÛG¼Ô‡uŠ¿ñŸþç’;>žøÈê“õ§„?]eF8œÛçlìÈ_s¥ÀÀ RL0÷P†qâãÏmÛ*•JŸßñœD"aN! 7O9÷ìCUGH^Ñ*´…æÇíò™3}ÅîWxJ×kB<É’ -ß>X~}n/Ì :–x篗ÈSÆ€ó´+—/¥»táÔæÊj;Çö÷ ž l=+rµ¨ü)úM‹!ЧyÈWý @îö$/1‰\© ]§ –“Øt-²*;ÔWÌ»¾2ÜXû¦æ~zÊBåøE°¾ë2Ùû:Dšo뵾Ɍ•˜þò6P€ZÏ¥ÈÐ KáV ;(çîö{¹È?×f_{e0hÛoÈÛ$åÈÞWx°çs{KôºG3è;`æH_Wc€Â£_h÷½ßh¡~~R»³%³HÌYíq~•µßRVy†] O “c¾KQ)WÄ)jŽ¾Ç¿Š öÈdBž´;/wǦµ®:Zí/vÌ,ØÁ.™é¦°dæíO11 Ôjõ™3g26oyòš‚‹âj±‘pDã­º€§þ7$É´ÌúÑSŽÑàt÷h‰¦øPšfU%ÄÙŽH©7X2srãz~ü@Ùo·>»Y¡P|ñå…z|…ñ‰>®†sH³êÙüÙ^û‹~ã˜Nm_•œ¿äÞOëìy:ß ÂÝ/|Û ÷–åh4×.5¾>ø !WÙ€1þÄ05óO cOrÝlQvȵT»ÂQ¸½/ÃD¢®fKž˜Âm-ä[¦÷íèéê± ˆƒ|û³¦÷Ï·¯6B½/Ä!úÃI‡ #Êã@¨<¹•z_ÏÓFÙE&Ñd¸éç;0½/ÃØò'¦÷e³xš½/Ãøá˜’Ö•x ic›;\ïË0V¨ª1p„PÐ$¯ßàT¶;Yô¾£ÓûNAŒ(?Î|~FÃÇy¯h~cí£?Ç(ë)Uu°«‡Ç“ Gk޵Èÿ½y_ÆÊ‡­ÞšB31Ilg¨„®x¸ÎLƲ"{°CØ2LFgEt(}иóè]ü¾¥ñœÄäh\öÊR#€’µ=ÞšÛiw,W=g©:d Ëc»F~ (¥”2W™q‹§À„i+Êj¿û¿ïÕNô±“» ¨ô¸€xÉõñxõ«§[¬^îC»ÿ¦\þ—Ãx‚&½4f±ÌJ¶v—¸â9`² @ß GÝEϾßÉ€ÇÃnà{I‹©¯FØúé~J-, Æ’?©’Ö56M%Ç¿$ -µWþ4#êÚóQæW«}ÚÈ­I¥š½Ÿ\èååÕÔÔ”¢y?0*‰–ÔO>~gÃl6WWUúúúzyy8|øðË¥ûüYT×óO çÏã¸!ÓÄgÍŸ_Ÿ”¸ʤe–îÖÚú³ŸLy·¢òNº ---J•j[f&!ääŸ> üÉ”à:³$+"/~òÜW×µÙý¿[Q·zí,?®¡úx+%.ù“Åv¤Éh¦Æ¾áïx‘nzj<€ŸZ·—ß~—•ã´yúæDŸnh<ô»r³yªd˸o›++«„Á4lM§N©¿$s{² ¡ð ”{ˆzEbÕäoQ–å‰iÁµÇ@HÛü‡Ò𯭨Áí6MBD"Ž…GDN¡É)4&Eyù¯'Ïý³[™œl¨ªrßÜ6) †Ý¼f‡iŠñ'†ñÏ?10ŒÿgïÌã¢8Ï8þ¼ì.Ë.7¨ˆ¢h ’š*•CMMêÈÂÊ¡Mšx¢&r. 
6r‰€i5jc"%Šrˆ´&M[’˜˜ªi›(®Ñ$hj¼‚rè²÷Û?† ³Ãì‚°ˆÀûýƒÏìì;3ì;¿Ïû¾ó¼ÏüÞúÙ3{iŸƒBOýž²BrfúAOÔìØÄ úIŒm§ÿ2‘’š¶Û/ òt²t¿N [ ó˜Ê}à,êb‰”zÏx&6»þˆÍÓ>±fgéÙVÓ0}ž˜Û‹=öî\ë­³Óׯ®%YWÅMùìÊ]õÝ€@l%øù»¯¹‡~_æ®±Áމží±ÍKÛN·çî1¡2VNCbWª<í…¢eñª 2¢»Žn0ý¢(‘#ÿ•2¢ž.ô¤ÇØcPÈÚ¦àù‡'­ â<°0ëœãèôØï cWgòs¯ÞUyÍ]k슎›6n~ü‘p}ü&{z'Ójûû_aÅ jÿö÷¾ÂŠÈk£¾xÆÎÚ¹Çîâ®ç€ñèàŒÍ«a\‡aßu>ó~!QOz²@èòFâz,h´FoyÎ^cÞ¿Êo„ýœRVéïQ<, B0“žì\FðÖº(ýH ¥Ûÿœ¬’ñư9õ¤ÖéxÖ"ÑüQ­Ñ–ýŸ 2Rgwó ˆµÑ³Ç(r¿ÍÂÛ13€o»:ó@ç¯ÊW=nÚ‡‚.ƒƒöOqàǾSÖ…žTµJ£Ö!»8ßѦm´Ž£¸ŽmRÀ¹œ æòÔz«HƳVO W[ Ïäæ ÄHÓòÖ– Bö<âÙ/[B}Åë~ëR[ont¨iÒ½”ÚUû¤U«´*•V¥Ö«ùHtǺ‹ÊN±vr.gÀ‚L@æÑsôþ©1Y¥?°"é]¯¶@xHV¿÷ vð8þÁ"ËýëZo¥‰F¨;ÍU×ýV«Òh”µRPÔhUÉ1+¬ijøø·©ç?’oçÖýkÐi ³Ÿ$åW$`^Mu·Ð)gi¬Àµ_æã)K Úƒ“5»›|í-b÷s÷hv`ûÀ·Vè,5|ÇÃäMYBÏéxÁ¥ˆg°lЛþŽ@ z"<®zz~Á³¬ ¡çzªštFäj™<êCí—Â,·5=U¶y팑ìã»u$ñÉÇ—·¾^YZüBàˆÎ_m©(3, }51ö*>Rºýåi¦ô¤ ³‰œ€'Ôï7àÿì¼`–yækY9¹;÷qÙ  6$$Å- ¾ŠKHZîÏR ÆvI)²ó\Mü6Œmã“SV/k?vÝ«‰´ò¨S=³ì÷[’7Úc .³Ã¶¤%;’'†^Qù‡WB$1ï¼ ã#’*ÊKbFµWé4„Xñ'¦¶–„.£âûm‹‰”$¿­4¥'• ºßÜÚÚÔÒÜx×ÙÉÙÉÑ‘[çÞ€ˆô„Ý;vºFí|£`ÇÞR>«dLúúÛsþÓ©HZ+”\bÒã ó¶¿YÖ> ½÷õ|ºd[ë¿€_¶?3oצôå þ¢<3{Çœud)ß^ñ|rneÉþ`‡#yaˤ’71EFâOtûTxÜÅs7§˜ŠXlS-æ€kOž:e,n€üâ f'¥A„˜J5Æ€ñ8ï€ÕÎuœÍH™à›wüi'kçB–D½¡hGjÀñŠâOÃb¿i¯R_ ½«>Zdò áÐpJ…L´O«0µv”¶éV¨g·Ý¸y³óèG¶yÿíB(ÉÊ[ûJBÒºhº¿‹‹Ð@ÎÞ/ÒÒR¤s@qÖ¾$™lå³7Mü¶?gíONY+aOØYu,—¼iWÖa¢s‘ZðVEÉ7_ˆ1:Àúgƒéù–±! 
•GŠ$÷^ànqèø¸§ÇW¾ûž¹al(ÝÍ™ÚQvhLhÊÙƒ¹ä.:ôD ˜3^@ ˜YO£\]¾¯·©B¯ô¤œŠmmmïüü³—×¥ËW~å7…ÇM‰‘>¦z²ôw¾¤ÔëõuåÞ“'ÕËåS||ØC­ïË´Ó ô›ž4¬<‡5FZŒ>¼^.÷ž<¹^~‰£xäû²ì+X:£ËS0í4˜g~A„ #ušïnÇ÷[šH]ÎU„à'àÛe¬q€Œè©H4’º Ë ƒÐ]=µiJ½Òæ¤îæ­[Þ“'S­«håûš¶¯0‘Ì<³óæxŒ=ÞË>âo¹bCâRŸö@î6 c—oM‰çU°½ø¿¸­=lÖÙ ƒÐ-=©U «}÷D·y¾Þ^uå^“&ÖË嬢ý•ïËi_á/â.Ì´Ó`ž9çEé?"”wºíƒ}……Ž1jýüƒì]EY@JL‡…Fg' BwãO<žÅŸzù%¯I/^º¤Tª†r½d¤-Ý–}œè£çz"PŒ ®š9rßö×ïõ6ˆžÅø‰@0 yKcÜÜ¢¤ÎÎÎ*•ª®®¾¤´ŒÔ¡‡í“P(\³ê%+++OOOooïÅ‹I%áÌr[ÓS·$¿2MÌ#M“É$ d-=Í œ0aŸÏ¯®®^ø·•'>üÈÛpﵬœÌ¼?>ÿj¸ElI“1³u'"d˜ìkÌ IöÌý½]UUUhhè¬W/nûdZõV£×(>ÞU€œ?=’ùŠI—fçb<›z z-+‡–TDzÂîì\ŒÇÑOHt²/ý1&=Žy83Ù7iSPɾe¤/ß–]"þ¢<óÊHKü4»€Ü³ §º‹ò¸uksrr —~³Ièhsæ™kÚ›Xt1Aåè^ òD»jô:gý’dß¡ÒßÕÔ^8{öìêÕ«@©ikÓ¶am‡˜81–­ ‚êp’ì; 1ˆ?ùMñõš4qïè*Q²ú»ŠÔ¡Wz"ÌÓßDO„ÇYOkUR+R#Cëö÷¿/²ÚwÕ­uRyp‡ »ŸïKx4`¾l C ÆÞÃì/_J§åÛZó¹4Eò¸D=Ê÷%ô{Žúí Ý7Œë q¦DúÇßW¥Q)5J¥F©µÀÂav‹ö8ß—ÐGØÊ"ª~·È›S´2úÇßW£Uª5mjMüý®hϽΖ=Î÷%ôIg÷Ü‘•—¬¸³Úc̦·ßü}HàtPOn¡Ç°ü}‰˜æŠDO„ÇTOÄß—`N=õµ¿/¡o÷0¯XÇzòïL_ûû¶?ß)ìl&¸ „tzݼÿxÀ?ª?å(î·ÎïÄX°1ñU|ý̇?ßšžÚ¨ÔêNî+;Ë\øc»dÙú›ÿ~¿¨ú¹Ùëܲû_cT*aê‰zò¯:VN‡†cL¿W8>"©0rjņÈâŸtróeHóNEÙËaáû"Tk´}zþ¾„>åÐØ\T*y׳4Áb°ùûú¦³›Ä‰UÇc%„Eƒq›ÞÁàïK±˜ æ%º"ò›_u¹}œdĦ÷±ð÷íÄßwBò} æ„ße‰Ö y¤š­'IXGÀ©î¢¼þ¢œÔ¡WíSiE%µ!‹½&O2-©‚ˆæ„£ö=»jJHÓöJRûƒîù;…Bñscc7OQÑÌü˜#¹o…pдV‡ä³'U?„ôÁ»çåd¾ÆùU~°ÚB 5v`ñQk%FŸýÛòçª-Ú¼éòfªáüpà_B´–ÓšÆð¬€&–. "šÝæŽÐ…>Ž%_cgŽpÑ DÚ‡ Þ,>ÖSä鹋çãÿÃMûD ×ð»sö´j^÷po€äÓ–€²0X‡ïãàG úâ$Gsõ? 
w_EAx+صvóÌü‰ŠÌyʇª‚šý3´Ê&"…¾Ò%¦šoÎ 9ÉR#AGƒqÓ0¢ñÀpà´VÈž«A¾fgìòÛ®Â>j€z,ÝŒ1ÿšà§Q:U+QCïéˆ?IÂBJ+*}Ljý‡íi½<æ´ÛíÛwM¼@•¨ß^ý;ÿÎtŒ±­Ã¸°cŸÿà~³ßÔ§-,xD½ÒÓïɆOѸ¶~¨„ÈU*ÕÁƒ[[[wïÙë6Ö]ñ@ackKÄÑ«ñ“ÐKKƒ×ÿž|.Zö«±âAY O=õTbRÒ† Zšš† 1DÄ4aˆté³%eîg}¤wRûÅn>‘‘ |9ÏÉ;®}©¸Ñ®Éáz½þÆ­ÛôÇæ†Ë_×Éo6kb£¤~¾>5µ0¶ {ÂAsùÇÆØ(鸉“žtÖ}{­qÀU뵫 ã=ž5Úí§ëÿ»wïž7ø;;ǰÈúªò —¾?_êž:N™³lîL„PMíJCÔÔÔ^ ¶5-·ëêê¾mP ¤6ÕßéõzVÇüøäìy>£lNUÿó¡jÏܨ¥å‡`<™ŠuW••ÑÛ‹³æðx<„ÐѲrµZ=ÂeäPhœ¡„ñ‘º­vç?ÿ kŸèb°(*øÄá¿<Äxœ¥§ó_~r fù²«‡+c@´r˜ þà|~{%89㡱 Ó•¾×WÿvZI±|ÁQ.1ŒŸô†°ô°`qtDÈ™ŠrŒ€êC]&‰X<§½U[.Y8àýæÑYSênÅ‘ù C¥Kç@Å•{œm'±QR!rŒ’Úseš›'?36JÊÔ/aÈBò} }/ ˆžÿgïÌ㢪Þ?þœÙ‡eØDÀ ÷%pùº’†õA@RÉ2+2SGQd+!¿-–f™š?S6J3RS‘Ü2tD@FQv˜ýžßÃ0ƒ¢¨çýâÅëΙ{Ï9÷Ü{>óœ{Ÿó4â­Ü'YÞ‹ðö†@ Ÿ;ð3tîgÏœúŸQÎÖ¨4ÿŽ0Ãå•yþ¾³½^ž2~Ô`[YÙ­J}—äš¶–?öÌéëÆF«ï^=•#G0mNT¤ÍÉÓ%†žÑ£´@×òï´Äî®áù¦Ã—wꉚ¦’‚êþH1ƒv8Õƒj“þf,¤°ábi=óì)£ßšô•3uü±¶±ÈcÓ‰Aã­ éÞÍóûSsjU½·ƒôÖñª›÷¢i.}̸¸éY©gïw>Äþ/Ú6b}ÀgÑ©­/ÌYü7oÄo"»•üàƒqIÛ~×8Š¿Æÿ›˜¸#]z?s"æÈzÁ"VÌnB0:ÜX †èË€1k È¡˜xh“m” rÛùsÆ8ö6¡Iªn_ØÿSvuKS.˜èlÅRT—^ÎÔÝt>kùõ1qÙa<,JðZFLÜU„0ž%0‰Ž=ôˆ·ÆÌa¯ÎŸæfgÁPT•\HÝwRôp³D "¿;Wä7ª¿%CVyóÜÞ”œúæ½áñË“z~ƌ䃼N32Í7úì 㤠ºßRuöo¨`yiL\R³$MŠ,ˆŽÝ¯#½õ@¿‘ GRwý[R^+3 ¼ì„¤`þÚñq±çà «#XŸGg,æBâ--«ÇoN›°È}€%KZy[x:í÷<™¾]¡KÑé3¢–[lüª&l¶bGÌa‚P¸ü–+>²I‰»Ð®ëNÛþ§ê”M¢Íkr‡–—ÆÄmniŠ ãu5]f⃨Œ³?oê¿æµŠDÑœÕvW7‹LWN¬LŒÓ¹´¦5¤5=H°ª<&îËCª¢‹¢c÷hìãxlßWÇU;LŒÌ‹ŽM&ÝðÈÖÈâË’Ë’aÌø©dè¡Ý-=І¿20]ͱñ€12wš¾À.:v—ž%6î‹ßý^äeq[`EÔ¨½Ñ[›§MíÃûöt»]MdÙiÙÍ"΋¬Ê‹M2à…›w½Ê_=ZÁ½þiB±ÂkG}È`íŠ.ÐrŠE-mŠPƒz‰÷‡÷ú›I¿ýßbZ7…öc´ôÎ:¹QŽ%sw´Ô€«ÕÖ¼m«V4€†Vå-ŽŽ-Öml‘ŽGx$yRCVd¦,)Pöq*;âWÎRï:gSLIÝ;=íǘ¯'¾õ‘S/#Wæ~ó“j¼ÓQºËÑÞ¾Üû™Ñ¤µå—ÒwTè­ñgVGFÁ‘èOËÕ‰åŸÅ1tæÂeîý,9¸A$<ÿsÆù`î>u¦ÇP'[S†R\]^ðë–DýÍ%›w(‹·Ç €´ï° ð˘rÐ[Ý÷ÆlýOÐÛ«­˜rÕànŠî¦»pòuS/n*„Jâ¯X­bÓ x?&ûc>qñ zg˜}/cš´ºìïS?ɫӨç–WðòQý-ò…羈9dpGèô'\Ç‚ŽÂIN¬½cÖñ‚Ưt¤£aŽž5ï4ÀqOkÖ§ª „Ç,Oâ÷D ôÜ9Þ²aæ-ÍI£½ä *«jHC„ž(OZxÇ!$¹×À8%¡ß#0 ÂS@û¤Äfq,ÌyÎv̾²·Ì½é ýG "UëÖ|æ;Þì1¼BV?”mÿtÖÐô׊{då>B³&¼µe÷¾ý){w&­Q¯ ­chÕÆºôÔ%}˜ºwî(‡¶‰šÕôÉï%ìÜ—|pßöÍïM¤a}eA»õ$SÈ‘BÀ¢s,LYfÆr'iCC#U$6κX—=ÕfjÓ(ÁÇçb¡Ý‹õÇŽü¡ ¾<¸’G "¿ýó¦ß({+#ƦØx­â²1&NÇá¥zB·r`ÿÊ~swhóÕ´úƒWFô7£‹E×ÇF«–H¶î÷}8EcÍh­+‘ƒ¶¥¥Õ+—k¤ÏHO~%tî[`nlV:Ýwî©G'¹ä»\³l Å3Û‹4rn£N- Œ‹2“6wê ¤ÕZ«Þé0[t¸’[eøúÔ>Cl9kÍëÒoþ— 
[binary PNG image data omitted]
mistral-6.0.0/doc/source/img/Mistral_cron_trigger.png
[binary PNG image data omitted]
mistral-6.0.0/doc/source/img/mistral_architecture.png
[binary PNG image data omitted — truncated]
XñŸ’‹G¼ýîŒÏš¦a/&›Æþ<# [$\çIÂf#™}ž n5²eÇøO+H]±Ð̯@O÷ähs?fom¼Ý4ó˜;ç ÞqÜãv<ââ”÷wZPZ·O»¯Š`·y\bº!HÎD–#‚ºöçÌçÕ• ïÐd¬4zIàBów›'#í)r2Û“5—õšjdkK0˜•²Ý Å8-ARf™ì †vWtûÙPD’t\—ÙÜŸWï¯;ázžóäétOWÆŸí½wIÌLx]þä<åˆ`‡yl¹Ùn¹‘ßIˆàò„sÐûé&ê¥ 4Vr$6.³·9~¾Í“ƒ‚¡]'­Xøcè–CǾm5Ræc÷µËkB¶¥Š –ÓIÁЬÙj#¯Aü¸;_›±[ÿýX÷·^"¸"EJm¦qA"Ø$Ï{vÏœ ž¹ ž];°×HÇc òåº nOØÿœ¹H8_Rl,†Z¡š•°ŸÞ~fzRÕkÎÁe×&˜×-+ðþ–Ú5Ô iO"˜ö¾ÙX^n-a¿3©F͇“;;K¨Óç2dÏÉÝ]7Á û¶KQÌ-S·ƒ©ÁЬ`©"h³W§'ÓUtæ¨ë£ËþÙLÛ#…öñ©ÞÛ±æ}õ'‹ÑñÇÙ.¹«JØB"¸&%N†n/°ß)T!€æã†`èL“v<Þœ`hÈŠ‰gØáí{m0t-ÂREpJ,Ynœà ŠàTó˜¿Y)rë/¡ñˆ'ÅN·”ðþv&ªÙw¯OÅ´„÷ÝÎâ99á—$¼^´ÅRßf»&HîÖ¹0áxÝFíÌ£Z‚cFÜešy)„ñFÆ’DqŠy^Â07+ˆëJ8_}Qí ò³o¥Š Øh_Ÿó¼`p™]ž,ùò»2åñ%¼¿’-;C©ÎwA,[»¼ý]ãÉ–{\Y׉ñ¾º®±=ÈŸXG×¢L£]NbF‚4®Œ·›ä/­áDp¾wnãâsp™Ñk 4%mAòX2‹]Š`cÂó‹‚ô1d½žd•+‚mF¤î«P§éóCåw$w•õ¥xz‰ï±2¥ÏÅÇÛIâ\Fol¿Tƒí2Û›pË ì×fB•Q|#a› "8Ö{ÜßæQ}𗮏Á¯˜‘ðüBóü¢²sk,*n­¾åÁÐen÷³:aKÍq|ÑtO+²·?C¨2j+ƒÁ5·ÇûHã·ÂìËf½:ãǺË|'Äç´Ñˆ­Îa]| V0Ý9µÇÏkû±øµÇÛ']ãÜøñ±ì=’òyÍŒ¥SÛl1Ÿ“»^;FT2¨ q‹·ß ùLª Àð˜KXo@k1–· +ÌÎÅü&ÿ•kà:/:2R?²ò™\‘‹oe¤\*[ÜFšy¹ ‚£‚`ïQƒrµ7÷ïò ]S2Ð|D‘Ü<<§I£-nŒ4ó5pÓLC—Ϥq> •«®Œ]Ï|n§M(‚ÿ–‹û›4\C¤™¯ëh¼8Lj ŸIã|*W]»D¤‘Ëu ‚|&ˆ "ˆ \®ä3AA@iärˆ Ÿ "ˆ"ˆ@qˆ Ÿ "ˆ"ˆ@qˆ Ÿ "ˆ"ˆ@qˆ "ˆ"‚€ÒXç:ADDä:¸DDA@¹®DADA®ƒë@AD@iär\"ˆ"‚ˆ "H#—ë@ùLAD@iärˆ Ÿ "ˆ"‚ˆ \®ä3AA@iärˆ Ÿ "ˆ"H#—ë@ùLADA¹\"Èg‚"‚€"P\"Èg‚"‚€"P\"ˆ"‚ˆ TUçÄ ’fŒ#ã†H3_×ÑxÑaDϤq> •«®Œ]"Ðd, Lƒ— ‚¨K\±ë™Íí 9epy“Gg®ëh¼˜—‘ú‘•Ϥ;ë2R®‚ •­yÜFŠ3;ó«¼¥5c|.Ús1Ž·([P)órV9úA€šÐ–‹Þ¸žõÄ wÊ”ÍòøÆßÿ¸±!Þß|ÞZ€ª2Ù4ÔCì@Ù€jˆàüÝÁ”\l7 ôéÞß4زˆ @†êã‹<@ÙD ‰ÑXÛqêNhÓ`Ê ‚k¨ïŠëÓº\ŒMÙŽ;P¶ȳKl¨Ó`Ê ‚`nп‹êÑê2^Gƒ([€´PC;P¶hB›†úòaì‡;P¶h’†º[È{qöGƒ([€40˪ÜP§Á”-@˜Õq}Q·½¹5Ø? 
[binary PNG image data elided — tar archive members:]
mistral-6.0.0/doc/source/img/Mistral_workbook_namespacing.png
mistral-6.0.0/doc/source/img/Mistral_dashboard_debug_config.png
?ŒÔ!Ml(²01ƒJ¸yJ1|°jGrjÇ<#ïçø>òô^UƒÐxvœ“ÎÚrÛ Á‰ô'>QOLªöLðlâKĆ4»“Þ¿ÕM5il¬ŠyÝ87ϰÕó3ÄÄbP½uVq"ÆÊ¨ ÃÇÒtO+Œ”(âý™Ù1z ÁÈ#8T¥hÓµ—f‚ˆAqA ª^éh>Ñ<¨sšvÊGøœ›‘”1‹ ?áä1käÀ´+ÁŒò‘^cÖJ[Ñι 6Z·m&ˆ1ƒ­ª£|qh‚˜¶W+‚‚F»Å€¬Ùæœ1 ŒT&ˆï¾ÄVÒÁþ”è#cÔH±ã<Åš~Ÿu¢½b½Ñàvᾈ!ƒxÔ”³•^«‚X/s±˜yVµk•P/ú‰6é]» â b¤I#¯?”Ñ  ÐfŒlcq¢³È2¤vNåY ü{¾k$ˆQnl-ͼ¤™“|‰$R´øwUýˆ±'€ÁAžÒeÈà4ßé"ìf™êŸg²(wlš‰˜’Ÿƒð\bA3Al¥n­"Ÿ)G¾çAA§‚΀³Å&Á’Q#7½ž{Æ.½<ÏqÄšŽŽÄ(·AŒÍ?©ó=÷ QAdxVœ0vËEà\ê)]Ê„ÃPö˜­ÅNÚXcÉIl~IgŒóc¶i#4f–©3Å ´l<7Öü¨/ß,©KêŒQ·˜AWÕMA=AÄvbWpUº°bÛÜ#®Çα‡| .ì Û°Åtƒ]+‚È¿iê4¾ÇæÀrï4‡ÅR÷ç¹ñìTHc̿܃z (Ô-6žä¯]ä›nð1Ή{²ÖHYcH{3PHw×òm@y¢lÄü›2 NœÏ½òW·bÊ÷ jâzü:-W+‚;é$KRÖÛx£ ޱ Æ c¨¿žÆ“¦n¸Çbj:úeäÅw±†ÑÆH®UA̘ï þ±µ<¶?§£2+¶Fdz¹&EDâ¥tˆ1§Ï­ €±³,ʯ>äÛ·ãUÒ9qM/bŒX§ÏM_åaŽônîŒõêõWGOc†A¤›©Ã–ôŽx­(_×c€E ûÀRÑlESŸåÚø>² a“ù²BdzÃîxv¾q‹{Äú Ä¿ÒlHÕ{ˆ¤Œ#óe‰Ùm1$ÂIJé&Ûi<`‘Îx9—ã1ÌcL^7ü7­[3A¤œqmó= â8aÌFZy÷p¸ÈèãlåûÛ„ˆZž–ì³arŸáJsÄ 4 ”»^ÙbwÜX´«¿T3ø__Iß'ßä$+/Š÷Ïø5òžÍs=›k)ßpn(iÅG)O£²áoâ^+uóǽAŒ #¥Vß=”RAlþš ³–‘þat)ÄadüÊDÕ{1äÖcër=ŽÄ”_êÀG³OáKñk0ù,«™?Á¡¦X¥Tˆ ¢V•†#E@НGû®¥<Þ}Šõ7ÖÍóu¾ø-Ùflõ~)D)u`}JJQJXŸ’RA”RÖ§¤T¥Ôõ))D)u`}JJQJXŸ’RA”RÖ§¤T¥Ôõ))D)u`}JJQJXŸ’RA”RÖ§¤T¥Ôõ))D)u`}JJQJXŸ’RA”RV¥T¥ÔD)D)u`QJQJ© J© J)D)D;NJQJQç•RA”RAÜRÜ÷Øn;NÊ¿ºæ­?© êSR䊼utâ”îÞtÑ]»ì8)küß—¿òãSæo¾JŸ’òÈØóÍ¥ ßÕ»p|Ï{¶Í  ÿÃÕ[ÿðÒ®÷í<Ùö|zóo‹ô2Cü°s޶ΡúÔÉW÷ú”lwþîŸþ¹8ýº—JA<áª-gtŒwßÝ{/…½àK¯èÀ²­ù·ÛûŠ3>·½L—öó[ú””Ó.Ò½ysÇÑ€»wLê/ð+åLqaïǤzXÿ`S€”íÀ•?y·LëL]ÐûqM 7tÌÚ0áH|ê„ù½;b¦¨OÉvô©?¹åµ¢æO:»6Ïê8Z@j¨³{Ëk…—²]y€M1G"†ú””‡pÿä®­—t8~Á泦tm¹ž  e{qóÜ»7MÓ§¤>ŸÊ:¼B!„B!„B!„B!„B!„B!„B!„B!„B!„B!„B!„B!„ã ÿ?آȌF`¿½IEND®B`‚mistral-6.0.0/doc/source/img/Mistral_dashboard_debug_config.png0000666000175100017510000013656213245513261024701 0ustar zuulzuul00000000000000‰PNG  IHDR¬rOÚøsBITÛáOàtEXtSoftwaremate-screenshotÈ–ðJ IDATxœìÝw\ÇÛðgöî¸BïUQl`ïMìÅKÔ˜nLbº11EMÞôüL¢iŠн›ˆ(vT°  U@z‘ÎÕ÷…〻〣?ߟdwvgæÙïž››Ý%^^Þ€B!„PG´t!„B57L‚B!„P‡Ã×µÁÏÏ·9ã@!„BÈè"#ok-'µçûùùq ”6mL!„B5B*"##knª‘ûùùR ,‹ù/B!„j†RsH˜ggg¯^ñõí‡0B!„jO(¥àìì”™™¥.¬9'X©TéoB*•–••€Db*‰ˆz”!„B¡fdxjʲ ð4Kª¦CøùùÊåJªc"pYYYFFFyy™zB€"Kœœœ%‰ÑŽ!„B!½šBLLøêIÕF‚U*-ÃÀ”Òœœì'Oò)¥ Cªå×R©49ù‘µµ½½=Ž #„B¡&Õ¸Ô”¯e *'LÔ(IMM•Ë¥ £ïŽÂ……O¤Òr7·N˜#„šßqâê·'Ø]ÙôGS- 
B¡&fÄÔ´Z̲lÍOž„P;ScˆÖÄÜÁÃ×ß÷¯Óÿ¾?ž,kLÎ$rîÛ‰x|÷Q‰ªz/<±›÷P7ï¡£îìùùï¹JC"¤”­ý}˜10f^Ó_~aŒsÕWwŒÄÞcÀU|ø½,iz:ùöä†lÚt&YÆ1²RYÚ·œ6bÌzÔ>]²´ÓÍÖ;B¨3njª/ ¦”årmBˆ‡‡‡\.×ÚD·nÝ¢££¹ÿ”ªšæ=!ÔfU R&üùÁÏ÷¤<‰M'¿©Ïös>ÒítRÓûõ¯žïÎm-k¬·\I–wítðñˆ¬Ê±S‡^Ý„9QñJÖ¤ª—(™‰ÄÚ¹û€ñ³¦öµî7oIR¡¹*¾MŸI³§ìí(Y^ܵSûŽGdËie„Ïoü <óöÙûÎÅ—²Õã)¯µ < ¯qÓFõídÁP–æe$…ï ú·úìdƲÿ¼ecœxrûèÁoÆe–°"+çnÞ]ÉýyJˆ‰ãÀéó¦ ñ°(ò®ŸÜäf¦œªûÒzø|S°›ðúg¸£þôüØO ŒoðÙ®Y¨*JºväïÃ7s•5OWíÞrDDÜiTà‚Iý]M‰º÷òFþå!„Ú㦦:§C¨T*Bn2±H$ª‘hkbF,–¨·*•Jõ05BU>V~YNYVÅÃgª¾>WoÔX…ê£ÄB;Ï1‹Vòò¿Ø—(àÙxõ4(~“«¤TPU‘UÉJrÝ>”\ôê' ºw1Ð>ìT†’˜zÏY½bˆyEk¶žc½ÆËß°/QVã+|±“ïô—ÍÊ6ü|9ŸÕ%âϼõÊHõßÔ¶SoOKޙ͗ežÍÀ‰>&À&ìÝt9Ÿ{%.ÉM¾{1‹¾ ÞZä+ÇÈE«Ìdƒî³T÷áלsPã4êÌð³]³gÑuÄÂ…©ñÿ»ö¤ŽÞtD·‰Ëg °©Ö&ά@i2zjªs$X¥R©+…B©´öGòª«í„B¡B¡ÐïªT‘ûtáÿ~Ô,.¸–$eYQeÅ}·Îj¬Vähñ¿¿û¿{ ó®ã_xsF ßNã“äŒe·Þv²„;éR–­V±â•ŒÍ¿þxAww›NV ›¶¦ 1‡²È¿Ú}3Oìðö‹Ã­ü»ŒO`5:RYö|úÕ•“œ»`wåL&«'<¿i#m Âwü™mâ÷æ†%]«&¶Ýì õÚݼ+yú{' :"bêl 9§¾þæTJCfˆ „Ú=£§¦5ŸWÕ !êž(eËËõ]2M)ËçWìÌ0˜#„ô“ÇŸücë™XÃ/Œ£Êâ¤k—SftébjgÆ ’.¾®lJäc™Îj•¯Fö=@â·èC¿E•;ˆ­$ÕG¨¼ðáùs&-ìbÓÅF™zb8z»@ê™ãå,€L×ÄcnþšöqM§$‡œ{ø¤œœ;—ìÿ¬»Cw{AhŽÞÃ×ÇàÀêBDFÍ{ö)¿Næo&¾[5âˆdiá1rŸ^ö“ßXívþä¡3w²rÁ B¨=3zjªs:! ŸÏçòf–eu 8sX–òùuEü !T¥â!þ·w~º+5õž÷þÊQÖ»Ú *¿‘WQ"ðRªN5¿‹¯øb^^ªÂpÛ„n¾î @êíÄVã›}ÐøgÓgH'ÈKÊ“S*B óðg_ý—]ýÁ@âju ÃãeYJ)è ðy ”)Øš›4_ùÉùàcï:°§Åõð‚šcœÕ§T›  çð¡V_¦ïpjµÉwž°|Î ûÚñRJ0Z{oØ© nþùÊI3&ë×ç©å=ݶ¶åf­“…êÐŒžšVûPO5P€Ïáñx2™LªƒL&ãñxüJ!„ªÑxaKØr,•šxÏ{~‚ŸRª,É)€N£‡w5çÕȦj¼2i® œúxˆr¢â U5·˜Úuñ›üü[ó=`Cod+)•å<̧©K¦úºš ˆ¶±Sÿ)Ou€Œ‡Y2½á)òSò ‹ÿx/©m%EÖµ°T¾×âWŒêno* „Zºø ÕËŠ¡òœØlpŸ0ÎÓJlÙcìø.Ÿ#×øÚ^³ ¬g›Ržu[Èûï‡^ç÷¸ª_¢¼LNýzXók×màQV–qìÏo¾?ž`â5ÈUPëL"„:6£¤¦šy¯¾ûóx !\–La i­<š»G±ztšeY¥²¡ß¼!„Ú%õë÷ú#{|æÏ½>™ÓmæÒñw¿9žzífþ¨q6§¿õÕôjµª®ÊªxõÓXel¼¼-Šc¢r•÷¨Ï—¿ûY³ó¼ëÛÿº˜£¤ªœk‡®ø¿>ܪëÄÖL¬Ü·eõwÔu¿U×-ºr$ÿáó«6ÈÈ>ý%<üDääýœ'½¾aW\y"<§êR?m‡_ãÌÔ8Éz“|¶@‘—ÞŽ¶ßújbµF2殺¾Ýͽøù î~t®j«2·!G$ê»úû—<Õ½”ä–¨j¿á „::㦦5F‚«ý(*A¾H$´±±æŸÕX–µ±± …ê\[¡PÖhðð‡V…Qf…J¦à:ýÙa¶Džxè‹Î®~…ƒÖŠêUbѽ¯€<ñvº¼ÆVŽª,?åÞÅ¿|úYÐÍ<%·-¹¿çË^‰ÏÑœ‹\«®ª(%ò覯wÝ/e)Pª'!„BUÃRSWW'íÓ! æc3«H¥ÒôôLBˆ‰‰ 7C*•ÊårýE!„Bͬñ©iÝÓ!jìÀ]|×°pB!„2–Ƥ¦õK‚B!„jê| ,¡öÁËÓ£¥C@¡f—ÐÒ! 
Ôö`ŒÚ­ä´ô–!„šœ»«KK‡€P›„Ó!B!„P‡Ó!F‚g͘V£$úalÌÃØ !„Bµ¸‘ÇÄÆ€¥…¹³“Wâݳžó2Áô»(>W’Pªbi…ŠBµZb‘Ȅϔ––*•J]ûP Ò|‘I»?@¤©I’ಅ|ìÒùÔ z¾ÇÓ˜ëA)<Œº|íèq˜5cZÌÃXÃóà.¢á¦f¦< ci)éé$¼’Pz+]êæêÒ£»‡H$cã¸T»¾ì +;§u5õëí£^¾õ ‘­!„Bµ øŒX,îÖ­Ÿ¯3P*•±qqO M„¢æŒÍ(Úý"MM2'ØÔÒÒÔÒ²)Z6œ„çd!Pÿ8˜ ºõeã3‚ÛÊM‡ðîÙëg:›ç%¶µaLÍ @ª45[fœ—|ûö±´´ …Ñcœ;;:8;:p©pc0 Ãçóù|>Ã4÷<ïuk×|òîP T³¤™c0Ð{«ßª³¤¾- „PÇQR\ìääD)UèF)íîá¡T(êÕr+yãhºl°VrfZ£ŸŠ& 扄ВãÀB>ùpzónuâK¥D\r÷A)YÂÞ={Ô9,‘¡~ yò(OÑÍNðÂx+VE ¨¨ÈÖÖ6êAt\|Žizxv‹70B.æñxàìèf¦ùÓ_µÒq¯t¹º9¹¥úo>_ÿCK‡€B-†RJ©AuÍ üžOŽîÕÉÚŒÏe%F];{ö~‘1Clœ¦;Àuk×|¶~£CÕªa½4Ol­P“$ÁJ•ª)š­g ´LƪW‹¥ªG‘—*êÖoDèíK1WAÛsZÂ*åT©d@©d•r%!UÿB¬,-!”Ò®]Ü{y{˜«3`nFÇkd¬þwkà¿^XþÜ’àƒ‹‹Kj,7@Ôæk3߯týáÒê¯ kï ÓÆöq±3Š¢Œ§¶Ÿ|H ¬[»fëdzû¹˜±ù1'»æ²xN7s6ÿ^ðo‡“ P*è3uþ8gk¾2?ùæþÝ2›xêK͇L›9¨›Êro8~³„xoõ[A×’¦õu³ó¿ýáÇ÷V¿õõ÷?ÔþúûôTßz)vªo{SFú$5òС˅8‡ !Ô†œ#j­ôf@›v‰Ï‘™Øuñ4tÜÿɨ6V`ód™3—m°&L‚[öš;«RUåÀ S(û/|‡Ç׾ø$Ø@¬RÁRЋÿ8Ÿ¸ÙðX¥‚UQõ_¿›«+ÉÊÎî×§·ÿrìíœì¹[u†aœì)¥Ù9¹†Ç¦Ö€$ø^TÔÜÙÁk.7 w¢ Ý’øákC|}½Zù»‹¼ÏŸÜûkò“R0÷³lÁK×?ß’Ïmz–\ýíÇxE—¥o¾Þ-üôï?&ªº?÷Ö¢±‡7\fâªûKÛÿ'YaÑoÒó/.Šø|W 4å§UÑÔåcÍÃvþq' ú?3ÿå©÷¾=)ç6-±Û¾íhŽÆg*õx0¯ï³ov¹¤¿úsŽ©»wŸÉ(3qô»dÙ€Ë?F4Eü!Ô (ÕÅ_›C ÍMˆ8•Põ’è82à™ÝÌi~ÊÍ=ÿ„>! 
[binary PNG data: Mistral_direct_workflow.png, source: https://wiki.openstack.org/wiki/File:Mistral_direct_workflow.png]
mistral-6.0.0/doc/source/quickstart.rst0000666000175100017510000001740013245513261020213 0ustar zuulzuul00000000000000Quick Start
===========

Prerequisites
-------------

Before you start following this guide, make sure you have completed these three prerequisites.

Install and run Mistral
~~~~~~~~~~~~~~~~~~~~~~~

Go through the installation manual: :doc:`Mistral Installation Guide `

Install Mistral client
~~~~~~~~~~~~~~~~~~~~~~

To install mistralclient, please refer to :doc:`Mistral Client / CLI Guide `

Export Keystone credentials
~~~~~~~~~~~~~~~~~~~~~~~~~~~

To use the OpenStack command line tools you should specify environment variables with the configuration details for your OpenStack installation. The following example assumes that the Identity service is at ``127.0.0.1:5000``, with a user ``admin`` in the ``admin`` tenant whose password is ``password``:

..
code-block:: bash $ export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/ $ export OS_TENANT_NAME=admin $ export OS_USERNAME=admin $ export OS_PASSWORD=password Write a workflow ---------------- For example, we have the following workflow. .. code-block:: mistral --- version: "2.0" my_workflow: type: direct input: - names tasks: task1: with-items: name in <% $.names %> action: std.echo output=<% $.name %> on-success: task2 task2: action: std.echo output="Done" This simple workflow iterates through a list of names in ``task1`` (using `with-items`), stores them as a task result (using the `std.echo` action) and then stores the word "Done" as a result of the second task (`task2`). To learn more about the Mistral Workflows and what you can do, read the :doc:`Mistral Workflow Language specification ` Upload the workflow ------------------- Use the *Mistral CLI* to create the workflow:: $ mistral workflow-create The output should look similar to this:: +------------------------------------+-------------+--------+---------+---------------------+------------+ |ID | Name | Tags | Input | Created at | Updated at | +------------------------------------+-------------+--------+---------+---------------------+------------+ |9b719d62-2ced-47d3-b500-73261bb0b2ad| my_workflow | | names | 2015-08-13 08:44:49 | None | +------------------------------------+-------------+--------+---------+---------------------+------------+ Run the workflow and check the result ------------------------------------- Use the *Mistral CLI* to start the new workflow, passing in a list of names as JSON:: $ mistral execution-create my_workflow '{"names": ["John", "Mistral", "Ivan", "Crystal"]}' Make sure the output is like the following:: +-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | ID | 49213eb5-196c-421f-b436-775849b55040 | | Workflow ID | 9b719d62-2ced-47d3-b500-73261bb0b2ad | | Workflow name | my_workflow | | 
Description | | | Task Execution ID | | | State | RUNNING | | State info | None | | Created at | 2017-03-06 11:24:10 | | Updated at | 2017-03-06 11:24:10 | +-------------------+--------------------------------------+ After a moment, check the status of the workflow execution (replace the example execution id with the ID output above):: $ mistral execution-get 49213eb5-196c-421f-b436-775849b55040 +-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | ID | 49213eb5-196c-421f-b436-775849b55040 | | Workflow ID | 9b719d62-2ced-47d3-b500-73261bb0b2ad | | Workflow name | my_workflow | | Description | | | Task Execution ID | | | State | SUCCESS | | State info | None | | Created at | 2017-03-06 11:24:10 | | Updated at | 2017-03-06 11:24:20 | +-------------------+--------------------------------------+ The status of each **task** also can be checked:: $ mistral task-list 49213eb5-196c-421f-b436-775849b55040 +--------------------------------------+-------+---------------+--------------------------------------+---------+------------+---------------------+---------------------+ | ID | Name | Workflow name | Execution ID | State | State info | Created at | Updated at | +--------------------------------------+-------+---------------+--------------------------------------+---------+------------+---------------------+---------------------+ | f639e7a9-9609-468e-aa08-7650e1472efe | task1 | my_workflow | 49213eb5-196c-421f-b436-775849b55040 | SUCCESS | None | 2017-03-06 11:24:11 | 2017-03-06 11:24:17 | | d565c5a0-f46f-4ebe-8655-9eb6796307a3 | task2 | my_workflow | 49213eb5-196c-421f-b436-775849b55040 | SUCCESS | None | 2017-03-06 11:24:17 | 2017-03-06 11:24:18 | +--------------------------------------+-------+---------------+--------------------------------------+---------+------------+---------------------+---------------------+ Check the result of task *'task1'*:: $ mistral task-get-result 
f639e7a9-9609-468e-aa08-7650e1472efe [ "John", "Mistral", "Ivan", "Crystal" ] If needed, we can go deeper and look at a list of the results of the **action_executions** of a single task:: $ mistral action-execution-list f639e7a9-9609-468e-aa08-7650e1472efe +--------------------------------------+----------+---------------+-----------+--------------------------------------+---------+----------+---------------------+---------------------+ | ID | Name | Workflow name | Task name | Task ID | State | Accepted | Created at | Updated at | +--------------------------------------+----------+---------------+-----------+--------------------------------------+---------+----------+---------------------+---------------------+ | 4e0a60be-04df-42d7-aa59-5107e599d079 | std.echo | my_workflow | task1 | f639e7a9-9609-468e-aa08-7650e1472efe | SUCCESS | True | 2017-03-06 11:24:12 | 2017-03-06 11:24:16 | | 5bd95da4-9b29-4a79-bcb1-298abd659bd6 | std.echo | my_workflow | task1 | f639e7a9-9609-468e-aa08-7650e1472efe | SUCCESS | True | 2017-03-06 11:24:12 | 2017-03-06 11:24:16 | | 6ae6c19e-b51b-4910-9e0e-96c788093715 | std.echo | my_workflow | task1 | f639e7a9-9609-468e-aa08-7650e1472efe | SUCCESS | True | 2017-03-06 11:24:12 | 2017-03-06 11:24:16 | | bed5a6a2-c1d8-460f-a2a5-b36f72f85e19 | std.echo | my_workflow | task1 | f639e7a9-9609-468e-aa08-7650e1472efe | SUCCESS | True | 2017-03-06 11:24:12 | 2017-03-06 11:24:17 | +--------------------------------------+----------+---------------+-----------+--------------------------------------+---------+----------+---------------------+---------------------+ Check the result of the first **action_execution**:: $ mistral action-execution-get-output 4e0a60be-04df-42d7-aa59-5107e599d079 { "result": "John" } **Congratulations! Now you are ready to use OpenStack Workflow Service!** mistral-6.0.0/doc/source/overview.rst0000666000175100017510000000754413245513261017677 0ustar zuulzuul00000000000000Mistral Overview ================ What is Mistral? 
---------------- Mistral is a workflow service. Most business processes consist of multiple distinct interconnected steps that need to be executed in a particular order in a distributed environment. A user can describe such a process as a set of tasks and their transitions. After that, it is possible to upload such a description to Mistral, which will take care of state management, correct execution order, parallelism, synchronization and high availability. Mistral also provides flexible task scheduling so that it can run a process according to a specified schedule (for example, every Sunday at 4.00pm) instead of running it immediately. In Mistral terminology such a set of tasks and relations between them is called a **workflow**. Main use cases -------------- Task scheduling - Cloud Cron ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ A user can use Mistral to schedule tasks to run within a cloud. Tasks can be anything from executing local processes (shell scripts, binaries) on specified virtual instances to calling REST APIs accessible in a cloud environment. They can also be tasks related to cloud management like creating or terminating virtual instances. It is important that several tasks can be combined in a single workflow and run in a scheduled manner (for example, on Sundays at 2.00 am). Mistral will take care of their parallel execution (if it's logically possible) and fault tolerance, and will provide workflow execution management/monitoring capabilities (stop, resume, current status, errors and other statistics). Cloud environment deployment ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ A user or a framework can use Mistral to specify workflows needed for deploying environments consisting of multiple VMs and applications. 
Long-running business process ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ A user makes a request to run a complex multi-step business process and wants it to be fault-tolerant so that if the execution crashes at some point on one node then another active node of the system can automatically take on and continue from the exact same point where it stopped. In this use case the user splits the business process into a set of tasks and lets Mistral handle them, in the sense that it serves as a coordinator and decides what particular task should be started at what time. So that Mistral calls back with "Execute action X, here is the data". If an application that executes action X dies then another instance takes the responsibility to continue the work. Big Data analysis & reporting ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ A data analyst can use Mistral as a tool for data crawling. For example, in order to prepare a financial report the whole set of steps for gathering and processing required report data can be represented as a graph of related Mistral tasks. As with other cases, Mistral makes sure to supply fault tolerance, high availability and scalability. Live migration ^^^^^^^^^^^^^^ A user specifies tasks for VM live migration triggered upon an event from Ceilometer (CPU consumption 100%). Rationale --------- The main idea behind the Mistral service includes the following main points: - Ability to upload custom workflow definitions. - The actual task execution may not be performed by the service itself. The service can rather serve as a coordinator for other worker processes that do the actual work, and notify back about task execution results. In other words, task execution may be asynchronous, thus providing flexibility for plugging in any domain specific handling and opportunities to make this service scalable and highly available. - The service provides a notion of **task action**, which is a pluggable piece of logic that a workflow task is associated with. 
Out of the box, the service provides a set of standard actions for user convenience. However, the user can create custom actions based on the standard action pack. mistral-6.0.0/doc/source/conf.py0000666000175100017510000001067013245513261016570 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import os import sys on_rtd = os.environ.get('READTHEDOCS', None) == 'True' # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.abspath('../../')) sys.path.insert(0, os.path.abspath('../')) sys.path.insert(0, os.path.abspath('./')) # -- General configuration ---------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = [ 'sphinx.ext.autodoc', 'sphinxcontrib.pecanwsme.rest', 'sphinxcontrib.httpdomain', 'wsmeext.sphinxext', 'openstackdocstheme', 'oslo_policy.sphinxext', 'oslo_policy.sphinxpolicygen', ] wsme_protocols = ['restjson'] suppress_warnings = ['app.add_directive'] # Add any paths that contain templates here, relative to this directory. # templates_path = ['_templates'] # autodoc generation is a bit aggressive and a nuisance when doing heavy # text edit cycles. 
# execute "export SPHINX_DEBUG=1" in your terminal to disable # The suffix of source filenames. source_suffix = '.rst' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Mistral' copyright = u'2014, Mistral Contributors' policy_generator_config_file = \ '../../tools/config/policy-generator.mistral.conf' sample_policy_basename = '_static/mistral' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. from mistral.version import version_info release = version_info.release_string() version = version_info.version_string() # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. show_authors = False # If true, '()' will be appended to :func: etc. cross-reference text. add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). add_module_names = True # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # -- Options for HTML output -------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. # html_static_path = ['_static'] html_theme = 'openstackdocs' # Output file base name for HTML help builder. htmlhelp_basename = '%sdoc' % project # A list of ignored prefixes for module index sorting. modindex_common_prefix = ['mistral.'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # Must set this variable to include year, month, day, hours, and minutes. html_last_updated_fmt = '%Y-%m-%d %H:%M' # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". 
html_title = 'Mistral' # Custom sidebar templates, maps document names to template names. html_sidebars = { 'index': [ 'sidebarlinks.html', 'localtoc.html', 'searchbox.html', 'sourcelink.html' ], '**': [ 'localtoc.html', 'relations.html', 'searchbox.html', 'sourcelink.html' ] } # -- Options for manual page output ------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'mistral', u'Mistral', [u'OpenStack Foundation'], 1) ] # If true, show URL addresses after external links. man_show_urls = True # -- Options for openstackdocstheme ------------------------------------------- repository_name = 'openstack/mistral' bug_project = 'mistral' bug_tag = '' mistral-6.0.0/doc/source/contributor/0000775000175100017510000000000013245513604017636 5ustar zuulzuul00000000000000mistral-6.0.0/doc/source/contributor/extending_yaql.rst0000666000175100017510000001232013245513261023402 0ustar zuulzuul00000000000000=================================== How to write a custom YAQL function =================================== ******** Tutorial ******** 1. Create a new Python project, an empty folder, containing a basic ``setup.py`` file. .. code-block:: bash $ mkdir my_project $ cd my_project $ vim setup.py .. code-block:: python try: from setuptools import setup, find_packages except ImportError: from distutils.core import setup, find_packages setup( name="project_name", version="0.1.0", packages=find_packages(), install_requires=["mistral", "yaql"], entry_points={ "mistral.yaql_functions": [ "random_uuid = my_package.sub_package.yaql:random_uuid_" ] } ) Publish the ``random_uuid_`` function in the ``entry_points`` section, in the ``mistral.yaql_functions`` namespace in ``setup.py``. This function will be defined later. Note that the package name will be used in Pip and must not overlap with other packages installed. ``project_name`` may be replaced by something else. 
The package name (``my_package`` here) may overlap with other packages, but module paths (``.py`` files) may not. For example, it is possible to have a ``mistral`` package (though not recommended), but there must not be a ``mistral/version.py`` file, which would overlap with the file existing in the original ``mistral`` package. ``yaql`` and ``mistral`` are the required packages. ``mistral`` is necessary in this example only because calls to the Mistral Python DB API are made. For each entry point, the syntax is: .. code-block:: python " = :" ``stevedore`` will detect all the entry points and make them available to all Python applications needing them. Using this feature, there is no need to modify Mistral's core code. 2. Create a package folder. A package folder is directory with a ``__init__.py`` file. Create a file that will contain the custom YAQL functions. There are no restrictions on the paths or file names used. .. code-block:: bash $ mkdir -p my_package/sub_package $ touch my_package/__init__.py $ touch my_package/sub_package/__init__.py 3. Write a function in ``yaql.py``. That function might have ``context`` as first argument to have the current YAQL context available inside the function. .. code-block:: bash $ cd my_package/sub_package $ vim yaql.py .. code-block:: python from uuid import uuid5, UUID from time import time def random_uuid_(context): """generate a UUID using the execution ID and the clock""" # fetch the current workflow execution ID found in the context execution_id = context['__execution']['id'] time_str = str(time()) execution_uuid = UUID(execution_id) return uuid5(execution_uuid, time_str) This function returns a random UUID using the current workflow execution ID as a namespace. The ``context`` argument will be passed by Mistral YAQL engine to the function. It is invisible to the user. 
It contains variables from the current task execution scope, such as ``__execution`` which is a dictionary with information about the current workflow execution such as its ``id``. Note that errors can be raised and will be displayed in the task execution state information in case they are raised. Any valid Python primitives may be returned. The ``context`` argument is optional. There can be as many arguments as wanted, even list arguments such as ``*args`` or dictionary arguments such as ``**kwargs`` can be used as function arguments. For more information about YAQL, read the `official YAQL documentation `_. 4. Install ``pip`` and ``setuptools``. .. code-block:: bash $ curl https://bootstrap.pypa.io/get-pip.py | python $ pip install --upgrade setuptools $ cd - 5. Install the package (note that there is a dot ``.`` at the end of the line). .. code-block:: bash $ pip install . 6. The YAQL function can be called in Mistral using its name ``random_uuid``. The function name in Python ``random_uuid_`` does not matter, only the entry point name ``random_uuid`` does. .. code-block:: yaml my_workflow: tasks: my_action_task: action: std.echo publish: random_id: <% random_uuid() %> input: output: "hello world" **************** Updating changes **************** After any new created functions or any modification in the code, re-run ``pip install .`` and restart Mistral. *********** Development *********** While developing, it is sufficient to add the root source folder (the parent folder of ``my_package``) to the ``PYTHONPATH`` environment variable and the line ``random_uuid = my_package.sub_package.yaql:random_uuid_`` in the Mistral entry points in the ``mistral.yaql_functions`` namespace. If the path to the parent folder of ``my_package`` is ``/path/to/my_project``. .. code-block:: bash $ export PYTHONPATH=$PYTHONPATH:/path/to/my_project $ vim $(find / -name "mistral.*egg-info*")/entry_points.txt .. 
code-block:: ini [entry_points] mistral.yaql_functions = random_uuid = my_package.sub_package.yaql:random_uuid_ mistral-6.0.0/doc/source/contributor/creating_custom_action.rst0000666000175100017510000000216413245513261025117 0ustar zuulzuul00000000000000============================ How to write a Custom Action ============================ 1. Write a class inherited from mistral.actions.base.Action .. code-block:: python from mistral.actions import base class RunnerAction(base.Action): def __init__(self, param): # store the incoming params self.param = param def run(self): # return your results here return {'status': 0} 2. Publish the class in a namespace (in your ``setup.cfg``) .. code-block:: ini [entry_points] mistral.actions = example.runner = my.mistral_plugins.somefile:RunnerAction 3. Reinstall Mistral if it was installed in system (not in virtualenv). 4. Run db-sync tool via either .. code-block:: console $ tools/sync_db.sh --config-file or .. code-block:: console $ mistral-db-manage --config-file populate 5. Now you can call the action ``example.runner`` .. code-block:: yaml my_workflow: tasks: my_action_task: action: example.runner input: param: avalue_to_pass_in mistral-6.0.0/doc/source/contributor/troubleshooting.rst0000666000175100017510000000453613245513261023630 0ustar zuulzuul00000000000000Troubleshooting And Debugging ============================= Mistral-Dashboard debug instructions ------------------------------------ **Pycharm** Debugging OpenStack Mistral-Dashboard is the same as debugging OpenStack Horizon. The following instructions should get you sorted to debug both on the same run. Set PyCharm debug settings: 1. Under File > Settings > Languages and Framework > Django - Enter the following: a. Check "Enable Django Support" b. Django project root: your file system path to Horizon project root c. Settings: openstack_dashboard/settings.py (under your Horizon folder) d. Manage script: manage.py (also in your horizon folder) e. Click OK .. 
image:: ../img/Mistral_dashboard_django_settings.png 2. Enter debug configurations menu, using the tiny arrow pointing down, left to the "play" icon, or under the run menu .. image:: ../img/Pycharm_run_config_menu.png 3. In the new window, click the green plus icon and then select "Django server" to create a new Django Server configuration. 4. In the new window appeared: a. Name that configuration Horizon b. Enter some port so it won't run on the default (for example - port: 4000) .. image:: ../img/Mistral_dashboard_debug_config.png 5. Click on Environment variables button, then in the new window: a. Make sure you have PYTHONUNBUFFERED set as 1 b. Create a new pair - DJANGO_SETTINGS_MODULE : openstack_dashboard.settings c. When finished click OK. .. image:: ../img/Mistral_dashboard_environment_variables.png You should now be able to debug and run the project using PyCharm. PyCharm will listen to any changes you make and restart the Horizon server automatically. **Note**: When executing the project via PyCharm Run / Debug, you could get an error page after trying to login: "Page not found (404)". To resolve that - remove the port from the browser URL bar, then login. You should be able to login without it. After a successful login bring the port back - it will continue your session. 
**Further notes** - If you need help with PyCharm and general debugging, please refer to: `JetBrains PyCharm developer guide `_ - If you would like to manually restart the apache server, open a terminal and run:: $ sudo service apache2 restart *(if not under Ubuntu, replace "sudo" with an identical command)* mistral-6.0.0/doc/source/contributor/debug.rst0000666000175100017510000000334313245513261021462 0ustar zuulzuul00000000000000Mistral Debugging Guide ======================= To debug using a local engine and executor without dependencies such as RabbitMQ, make sure your ``/etc/mistral/mistral.conf`` has the following settings:: [DEFAULT] rpc_backend = fake [pecan] auth_enable = False and run the following command in *pdb*, *PyDev* or *PyCharm*:: mistral/cmd/launch.py --server all --config-file /etc/mistral/mistral.conf --use-debugger .. note:: In PyCharm, you also need to enable the Gevent compatibility flag in Settings -> Build, Execution, Deployment -> Python Debugger -> Gevent compatible. Without this setting, PyCharm will not show variable values and become unstable during debugging. Running unit tests in PyCharm ----------------------------- In order to be able to conveniently run unit tests, you need to: 1. Set unit tests as the default runner: Settings -> Tools -> Python Integrated Tools -> Default test runner: Unittests 2. Enable test detection for all classes: Run/Debug Configurations -> Defaults -> Python tests -> Unittests -> uncheck Inspect only subclasses of unittest.TestCase Running examples ---------------- To run the examples find them in mistral-extra repository (https://github.com/openstack/mistral-extra) and follow the instructions on each example. Tests ----- You can run some of the functional tests in non-openstack mode locally. To do this: #. set ``auth_enable = False`` in the ``mistral.conf`` and restart Mistral #. 
execute::

       $ ./run_functional_tests.sh

To run tests for only one version, you need to specify it::

    $ bash run_functional_tests.sh v1

More information about automated tests for Mistral can be found on `Mistral Wiki `_.
mistral-6.0.0/doc/source/contributor/asynchronous_actions.rst0000666000175100017510000001336713245513261024652 0ustar zuulzuul00000000000000=====================================
How to work with asynchronous actions
=====================================

*******
Concept
*******

.. image:: /img/Mistral_actions.png

During a workflow execution Mistral eventually runs actions. An action is a particular function (or a piece of work) that a workflow task is associated with. Actions can be synchronous or asynchronous.

Synchronous actions are actions that get completed without a 3rd party, i.e. by Mistral itself. When the Mistral engine schedules a synchronous action to run, it sends the action's definition and parameters to the Mistral executor; the executor runs it and, upon its completion, sends the result of the action back to the Mistral engine.

In the case of asynchronous actions the executor doesn't send a result back to Mistral. In fact, the concept of an asynchronous action assumes that a result won't be known at the time the executor is running it. It rather assumes that the action will just delegate the actual work to a 3rd party, which can be either a human or a computer system (e.g. a web service). So an asynchronous action's run() method is supposed to just send a signal to something that is capable of doing the required job.

Once the 3rd party has done the job, it takes responsibility for sending the result of the action back to Mistral via the Mistral API. Effectively, the 3rd party just needs to update the state of the corresponding action execution object. To make that possible, it must know the corresponding action execution id.

It's worth noting that from the Mistral engine's perspective the schema is essentially the same for synchronous and asynchronous actions.
If an action is synchronous, the executor immediately sends a result back to the Mistral engine via the RPC mechanism (most often with a message queue as a transport) after the action completes. But the engine itself does not actively wait for anything; its architecture is fully based on asynchronous messages. So in the case of an asynchronous action the only change is that the executor is not responsible for sending the action result; something else takes over. Let's see what we need to keep in mind when working with asynchronous actions.

******
How to
******

Currently, Mistral comes with one asynchronous action out of the box, "mistral_http". There's also the "async_noop" action, which is also asynchronous, but it's mostly useful for testing purposes because it does nothing. "mistral_http" is an asynchronous version of the "http" action that sends HTTP requests. Asynchrony is controlled by the action's method is_sync(), which should return *True* for synchronous actions and *False* for asynchronous ones.

Let's see how the "mistral_http" action works and how to use it step by step. We can imagine that we have a simple web service playing the role of the 3rd party system mentioned before, accessible at http://my.webservice.com. If we send an HTTP request to that URL, our web service will do something useful. To keep it simple, let's say our web service just calculates a sum of two numbers provided as request parameters "a" and "b".

1. Workflow example
===================

.. code-block:: yaml

    ---
    version: '2.0'

    my_workflow:
      tasks:
        one_plus_two:
          action: mistral_http url=http://my.webservice.com
          input:
            params:
              a: 1
              b: 2

So our workflow has just one task, "one_plus_two", that sends a request to our web service and passes parameters "a" and "b" in a query string. Note that we specify "url" right after the action name but "params" in a special "input" section. This is because there's currently no one-line syntax for dictionaries in Mistral. But both "url" and "params" are basically just parameters of the "mistral_http" action.
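To make the example concrete, here is a minimal sketch of such a 3rd party web service, written with only the Python standard library. It computes the sum of the "a" and "b" query parameters as described above; the handler name and port are assumptions made purely for illustration:

.. code-block:: python

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs, urlparse


    def calculate_sum(query_string):
        # "mistral_http" passes "params" as a query string, e.g. "a=1&b=2".
        params = parse_qs(query_string)
        return int(params['a'][0]) + int(params['b'][0])


    class SumHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            result = calculate_sum(urlparse(self.path).query)
            # A real service would also store the Mistral HTTP headers
            # (available in self.headers) so that it can deliver the
            # result back to Mistral later.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(str(result).encode())

    # To serve (hypothetical port, which my.webservice.com would point at):
    # HTTPServer(('', 8000), SumHandler).serve_forever()

This is only a sketch: a real 3rd party service would typically respond immediately with an acknowledgement and compute the actual result later.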
It is important to know that when "mistral_http" action sends a request it includes special HTTP headers that help identify action execution object. These headers are: - **Mistral-Workflow-Name** - **Mistral-Workflow-Execution-Id** - **Mistral-Task-Id** - **Mistral-Action-Execution-Id** - **Mistral-Callback-URL** The most important one is "Mistral-Action-Execution-Id" which contains an id of action execution that we need to calculate result for. Using that id a 3rd party can deliver a result back to Mistral once it's calculated. If a 3rd party is a computer system it can just call Mistral API via HTTP using header "Mistral-Callback-URL" which contains a base URL. However, a human can also do it, the simplest way is just to use Mistral CLI. Of course, this is a practically meaningless example. It doesn't make sense to use asynchronous actions for simple arithmetic operations. Real examples when asynchronous actions are needed may include: - **Analysis of big data volumes**. E.g. we need to run an external reporting tool. - **Human interaction**. E.g. an administrator needs to approve allocation of resources. In general, this can be anything that takes significant time, such as hours, days or weeks. Sometimes duration of a job may be even unpredictable (it's reasonable though to try to limit such jobs with timeout policy in practice). The key point here is that Mistral shouldn't try to wait for completion of such job holding some resources needed for that in memory. An important aspect of using asynchronous actions is that even when we interact with 3rd party computer systems a human can still trigger action completion by just calling Mistral API. 2. Pushing action result to Mistral =================================== Using CLI: .. code-block:: console $ mistral action-execution-update --state SUCCESS --output 3 This command will update "state" and "output" of action execution object with corresponding id. 
That way Mistral will know what the result of this action is and decide how to proceed with workflow execution. Using raw HTTP:: POST /v2/action-executions/ { "state": "SUCCESS", "output": 3 } mistral-6.0.0/doc/source/contributor/index.rst0000666000175100017510000000026713245513261021505 0ustar zuulzuul00000000000000Developer's Reference ===================== .. toctree:: :maxdepth: 3 creating_custom_action extending_yaql asynchronous_actions devstack debug troubleshooting mistral-6.0.0/doc/source/contributor/devstack.rst0000666000175100017510000000050013245513261022170 0ustar zuulzuul00000000000000Mistral Devstack Installation ============================= 1. Download DevStack:: $ git clone https://github.com/openstack-dev/devstack.git $ cd devstack 2. Add this repo as an external repository, edit ``localrc`` file:: enable_plugin mistral https://github.com/openstack/mistral 3. Run ``stack.sh`` mistral-6.0.0/doc/source/index.rst0000666000175100017510000000165213245513272017134 0ustar zuulzuul00000000000000Welcome to Mistral's documentation! =================================== Mistral is the OpenStack workflow service. This project aims to provide a mechanism to define tasks and workflows without writing code, manage and execute them in the cloud environment. Overview -------- .. toctree:: :maxdepth: 1 overview quickstart architecture terminology/index main_features cookbooks User guide ---------- **Installation** .. toctree:: :maxdepth: 2 install/index configuration/index **API** .. toctree:: :maxdepth: 2 api/index **Mistral Workflow Language** .. toctree:: :maxdepth: 2 user/wf_lang_v2 **CLI** .. toctree:: :maxdepth: 1 cli/index Developer guide --------------- .. toctree:: :maxdepth: 2 contributor/index Admin guide ----------- .. 
toctree:: :maxdepth: 2 admin/index Indices and tables ================== * :ref:`genindex` * :ref:`search` mistral-6.0.0/doc/source/install/0000775000175100017510000000000013245513604016732 5ustar zuulzuul00000000000000mistral-6.0.0/doc/source/install/installation_guide.rst0000666000175100017510000001462613245513272023356 0ustar zuulzuul00000000000000Mistral Installation Guide ========================== Prerequisites ------------- It is necessary to install some specific system libs for installing Mistral. They can be installed on most popular operating system using their package manager (for Ubuntu - *apt*, for Fedora - *dnf*, CentOS - *yum*, for Mac OS - *brew* or *macports*). The list of needed packages is shown below: 1. **python-dev** 2. **python-setuptools** 3. **python-pip** 4. **libffi-dev** 5. **libxslt1-dev (or libxslt-dev)** 6. **libxml2-dev** 7. **libyaml-dev** 8. **libssl-dev** In case of Ubuntu, just run:: $ apt-get install python-dev python-setuptools python-pip libffi-dev \ libxslt1-dev libxml2-dev libyaml-dev libssl-dev **NOTE:** **Mistral can be used without authentication at all or it can work with OpenStack.** In case of OpenStack, it works **only on Keystone v3**, make sure **Keystone v3** is installed. Installation ------------ **NOTE**: If it is needed to install Mistral using devstack, please refer to :doc:`Mistral Devstack Installation ` First of all, clone the repo and go to the repo directory:: $ git clone https://github.com/openstack/mistral.git $ cd mistral Generate config:: $ tox -egenconfig Configure Mistral as needed. The configuration file is located in ``etc/mistral.conf.sample``. You will need to modify the configuration options and then copy it into ``/etc/mistral/mistral.conf``. For details see :doc:`Mistral Configuration Guide ` **Virtualenv installation**:: $ tox This will install necessary virtual environments and run all the project tests. Installing virtual environments may take significant time (~10-15 mins). 
**Local installation**::

    $ pip install -e .

or::

    $ pip install -r requirements.txt
    $ python setup.py install

**NOTE**: The difference between *pip install -e* and *setup.py install*:
**pip install -e** works very similarly to **setup.py install** or the
EasyInstall tool, except that it doesn't actually install anything. Instead,
it creates a special .egg-link file in the deployment directory that links
to your project's source code.

Before the first run
--------------------

After installation you will see the **mistral-server** and
**mistral-db-manage** commands in your environment, either in the system or
the virtual environment.

**NOTE**: In case of using **virtualenv**, all Mistral related commands are
available via **tox -evenv --**. For example, *mistral-server* is available
via *tox -evenv -- mistral-server*.

The **mistral-db-manage** command can be used for migrations. To update the
database to the latest revision type::

    $ mistral-db-manage --config-file <path-to-config> upgrade head

Before starting the Mistral server, run the *mistral-db-manage populate*
command. It populates the DB with all the standard actions and standard
workflows which Mistral provides for all Mistral users::

    $ mistral-db-manage --config-file <path-to-config> populate

For more detailed information about the *mistral-db-manage* script please
see :doc:`Mistral Upgrade Guide `.

**NOTE**: For users who want a dry run with the **SQLite** database backend
(not used in production), *mistral-db-manage* is not recommended for
database initialization because of `SQLite limitations `_. Please use the
sync_db script described below instead for database initialization.
**If you use virtualenv**:: $ tools/sync_db.sh --config-file **Or run sync_db directly**:: $ python tools/sync_db.py --config-file Running Mistral API server -------------------------- To run Mistral API server perform the following command in a shell:: $ mistral-server --server api --config-file Running Mistral Engines ----------------------- To run Mistral Engine perform the following command in a shell:: $ mistral-server --server engine --config-file Running Mistral Task Executors ------------------------------ To run Mistral Task Executor instance perform the following command in a shell:: $ mistral-server --server executor --config-file Note that at least one Engine instance and one Executor instance should be running so that workflow tasks are processed by Mistral. Running Multiple Mistral Servers Under the Same Process ------------------------------------------------------- To run more than one server (API, Engine, or Task Executor) on the same process, perform the following command in a shell:: $ mistral-server --server api,engine --config-file The --server command line option can be a comma delimited list. The valid options are "all" (by default if not specified) or any combination of "api", "engine", and "executor". It's important to note that the "fake" transport for the rpc_backend defined in the config file should only be used if "all" the Mistral servers are launched on the same process. Otherwise, messages do not get delivered if the Mistral servers are launched on different processes because the "fake" transport is using an in process queue. Mistral And Docker ------------------ Please first refer `installation steps for docker `_. To build the image from the mistral source, change directory to the root directory of the Mistral git repository and run:: $ docker build -t . In case you want pre-built image, you can download it from `openstack tarballs source `_. 
To load this image to docker registry, please run following command:: $ docker load -i '' The Mistral Docker image is configured to store the database in the user's home directory. For persistence of these data, you may want to keep this directory outside of the container. This may be done by the following steps:: $ sudo mkdir '' $ docker run -it -v \ '':/home/mistral More about docker: https://www.docker.com/ **NOTE:** This docker image uses **SQLite** database. So, it cannot be used for production environment. If you want to use this for production environment, then put customized mistral.conf to ''. Mistral Client Installation --------------------------- Please refer to :doc:`Mistral Client / CLI Guide <../cli/index>` mistral-6.0.0/doc/source/install/mistralclient_guide.rst0000666000175100017510000001270513245513261023521 0ustar zuulzuul00000000000000Mistral Client Installation Guide ================================= To install ``python-mistralclient``, it is required to have ``pip`` (in most cases). Make sure that ``pip`` is installed. Then type:: $ pip install python-mistralclient Or, if it is needed to install ``python-mistralclient`` from master branch, type:: $ pip install git+https://github.com/openstack/python-mistralclient.git After ``python-mistralclient`` is installed you will see command ``mistral`` in your environment. Configure authentication against Keystone ----------------------------------------- If Keystone is used for authentication in Mistral, then the environment should have auth variables:: $ export OS_AUTH_URL=http://:5000/v2.0 $ export OS_TENANT_NAME=tenant $ export OS_USERNAME=admin $ export OS_PASSWORD=secret $ export OS_MISTRAL_URL=http://:8989/v2 ( optional, by default URL=http://localhost:8989/v2) and in the case when you are authenticating against keystone over https:: $ export OS_CACERT= .. note:: In client, we can use both Keystone auth versions - v2.0 and v3. But server supports only v3. 
You can see the list of available commands by typing:: $ mistral --help To make sure Mistral client works, type:: $ mistral workbook-list Configure authentication against Keycloak ----------------------------------------- Mistral also supports authentication against Keycloak server via OpenID Connect protocol. In order to use it on the client side the environment should look as follows:: $ export MISTRAL_AUTH_TYPE=keycloak-oidc $ export OS_AUTH_URL=https://:/auth $ export OS_TENANT_NAME=my_keycloak_realm $ export OS_USERNAME=admin $ export OS_PASSWORD=secret $ export OPENID_CLIENT_ID=my_keycloak_client $ export OPENID_CLIENT_SECRET=my_keycloak_client_secret $ export OS_MISTRAL_URL=http://:8989/v2 (optional, by default URL=http://localhost:8989/v2) .. note:: Variables OS_TENANT_NAME, OS_USERNAME, OS_PASSWORD are used for both Keystone and Keycloak authentication. OS_TENANT_NAME in case of Keycloak needs to correspond a Keycloak realm. Unlike Keystone, Keycloak requires to register a client that access some resources (Mistral server in our case) protected by Keycloak in advance. For this reason, OPENID_CLIENT_ID and OPENID_CLIENT_SECRET variables should be assigned with correct values as registered in Keycloak. Similar to Keystone OS_CACERT variable can also be added to provide a certification for SSL/TLS verification:: $ export OS_CACERT= In order to disable SSL/TLS certificate verification MISTRALCLIENT_INSECURE variable needs to be set to True:: $ export MISTRALCLIENT_INSECURE=True Targeting non-preconfigured clouds ---------------------------------- Mistral is capable of executing workflows on external OpenStack clouds, different from the one defined in the `mistral.conf` file in the `keystone_authtoken` section. (More detail in the :doc:`/configuration/index`). For example, if the mistral server is configured to authenticate with the `http://keystone1.example.com` cloud and the user wants to execute the workflow on the `http://keystone2.example.com` cloud. 
The mistral.conf will look like:: [keystone_authtoken] auth_uri = http://keystone1.example.com:5000/v3 ... The client side parameters will be:: $ export OS_AUTH_URL=http://keystone1.example.com:5000/v3 $ export OS_USERNAME=mistral_user ... $ export OS_TARGET_AUTH_URL=http://keystone2.example.com:5000/v3 $ export OS_TARGET_USERNAME=cloud_user ... .. note:: Every `OS_*` parameter has an `OS_TARGET_*` correspondent. For more detail, check out `mistral --help` The `OS_*` parameters are used to authenticate and authorize the user with Mistral, that is, to check if the user is allowed to utilize the Mistral service. Whereas the `OS_TARGET_*` parameters are used to define the user that executes the workflow on the external cloud, keystone2.example.com/. Use cases ^^^^^^^^^ **Authenticate in Mistral and execute OpenStack actions with different users** As a user of Mistral, I want to execute a workflow with a different user on the cloud. **Execute workflows on any OpenStack cloud** As a user of Mistral, I want to execute a workflow on a cloud of my choice. Special cases ^^^^^^^^^^^^^ **Using Mistral with zero OpenStack configuration**: With the targeting feature, it is possible to execute a workflow on any arbitrary cloud without additional configuration on the Mistral server side. If authentication is turned off in the Mistral server (Pecan's `auth_enable = False` option in `mistral.conf`), there is no need to set the `keystone_authtoken` section. It is possible to have Mistral use an external OpenStack cloud even when it isn't deployed in an OpenStack environment (i.e. no Keystone integration). 
With this setup, the following call will return the heat stack list:: $ mistral \ --os-target-auth-url=http://keystone2.example.com:5000/v3 \ --os-target-username=testuser \ --os-target-tenant=testtenant \ --os-target-password="MistralRuleZ" \ run-action heat.stacks_list This setup is particularly useful when Mistral is used in standalone mode, when the Mistral service is not part of the OpenStack cloud and runs separately. Note that only the OS-TARGET-* parameters enable this operation. mistral-6.0.0/doc/source/install/dashboard_guide.rst0000666000175100017510000000407313245513261022575 0ustar zuulzuul00000000000000==================================== Mistral Dashboard Installation Guide ==================================== Mistral dashboard is the plugin for Horizon where it is easily possible to control mistral objects by interacting with web user interface. Setup Instructions ------------------ This instruction assumes that Horizon is already installed and it's installation folder is . Detailed information on how to install Horizon can be found at `Horizon Installation `_ The installation folder of Mistral Dashboard will be referred to as . The following should get you started: 1. Clone the repository into your local OpenStack directory:: $ git clone https://github.com/openstack/mistral-dashboard.git 2. Install mistral-dashboard:: $ sudo pip install -e Or if you're planning to run Horizon server in a virtual environment (see below):: $ tox -evenv -- pip install -e ../mistral-dashboard/ and then:: $ cp -b /mistraldashboard/enabled/_50_mistral.py \ /openstack_dashboard/local/enabled/_50_mistral.py 3. Since Mistral only supports Identity v3, you must ensure that the dashboard points the proper OPENSTACK_KEYSTONE_URL in /openstack_dashboard/local/local_settings.py file:: OPENSTACK_API_VERSIONS = { "identity": 3, } OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST 4. 
Also, make sure you have changed OPENSTACK_HOST to point to your Keystone server and check all endpoints are accessible. You may want to change OPENSTACK_ENDPOINT_TYPE to "publicURL" if some of them are not. 5. When you're ready, you would need to either restart your apache:: $ sudo service apache2 restart or run the development server (in case you have decided to use local horizon):: $ cd ../horizon/ $ tox -evenv -- python manage.py runserver Debug instructions ------------------ Please refer to :doc:`Mistral Troubleshooting <../contributor/troubleshooting>` mistral-6.0.0/doc/source/install/index.rst0000666000175100017510000000020513245513261020571 0ustar zuulzuul00000000000000Mistral User Guide ================== .. toctree:: :maxdepth: 1 installation_guide dashboard_guide mistralclient_guide mistral-6.0.0/doc/source/_theme/0000775000175100017510000000000013245513604016525 5ustar zuulzuul00000000000000mistral-6.0.0/doc/source/_theme/theme.conf0000666000175100017510000000010713245513261020475 0ustar zuulzuul00000000000000[theme] inherit = nature stylesheet = nature.css pygments_style = tangomistral-6.0.0/doc/source/_theme/layout.html0000666000175100017510000000020513245513261020726 0ustar zuulzuul00000000000000{% extends "basic/layout.html" %} {% set css_files = css_files + ['_static/tweaks.css'] %} {% block relbar1 %}{% endblock relbar1 %}mistral-6.0.0/doc/source/architecture.rst0000666000175100017510000000332313245513261020502 0ustar zuulzuul00000000000000Mistral Architecture ==================== Mistral is OpenStack workflow service. The main aim of the project is to provide capability to define, execute and manage tasks and workflows without writing code. Basic concepts ~~~~~~~~~~~~~~ A few basic concepts that one has to understand before going through the Mistral architecture are given below: * Workflow - consists of tasks (at least one) describing what exact steps should be made during workflow execution. 
* Task - an activity executed within the workflow definition. * Action - work done when an exact task is triggered. Mistral components ~~~~~~~~~~~~~~~~~~ Mistral is composed of the following major components: * API Server * Engine * Task Executors * Scheduler * Persistence The following diagram illustrates the architecture of mistral: .. image:: img/mistral_architecture.png API server ---------- The API server exposes REST API to operate and monitor the workflow executions. Engine ------ The Engine picks up the workflows from the workflow queue. It handles the control and dataflow of workflow executions. It also computes which tasks are ready and places them in a task queue. It passes the data from task to task, deals with condition transitions, etc. Task Executors -------------- The Task Executor executes task Actions. It picks up the tasks from the queue, run actions, and sends results back to the engine. Scheduler --------- The scheduler stores and executes delayed calls. It is the important Mistral component since it interacts with engine and executors. It also triggers workflows on events (e.g., periodic cron event) Persistence ----------- The persistence stores workflow definitions, current execution states, and past execution results. mistral-6.0.0/doc/source/cli/0000775000175100017510000000000013245513604016033 5ustar zuulzuul00000000000000mistral-6.0.0/doc/source/cli/index.rst0000666000175100017510000000667613245513261017714 0ustar zuulzuul00000000000000Mistral Client Commands Guide ============================= The Mistral CLI can be used with ``mistral`` command or via `OpenStackClient `_. Mistral Client -------------- The best way to learn about all the commands and arguments that are expected is to use the ``mistral help`` command. .. code-block:: bash $ mistral help usage: mistral [--version] [-v] [--log-file LOG_FILE] [-q] [-h] [--debug] [--os-mistral-url MISTRAL_URL] [--os-mistral-version MISTRAL_VERSION] [--os-mistral-service-type SERVICE_TYPE] ... 
It can also be used with the name of a sub-command. .. code-block:: bash $ mistral help execution-create usage: mistral execution-create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--max-width ] [--print-empty] [--noindent] [--prefix PREFIX] [-d DESCRIPTION] workflow_identifier [workflow_input] [params] Create new execution. positional arguments: workflow_identifier Workflow ID or name. Workflow name will be deprecated since Mitaka. ... OpenStack Client ---------------- OpenStack client works in a similar way, the command ``openstack help`` shows all the available commands and then ``openstack help `` will show the detailed usage. The full list of Mistral commands that are registered with OpenStack client can be listed with ``openstack command list``. By default it will list all commands grouped together, but we can specify only the Mistral command group. .. code-block:: bash $ openstack command list --group openstack.workflow_engine.v2 +------------------------------+-----------------------------------+ | Command Group | Commands | +------------------------------+-----------------------------------+ | openstack.workflow_engine.v2 | action definition create | | | action definition definition show | | | action definition delete | | | action definition list | | | action definition show | | | action definition update | | | action execution delete | ... Then detailed help output can be requested for an individual command. .. code-block:: bash $ openstack help workflow execution create usage: openstack workflow execution create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--max-width ] [--print-empty] [--noindent] [--prefix PREFIX] [-d DESCRIPTION] workflow_identifier [workflow_input] [params] Create new execution. positional arguments: workflow_identifier Workflow ID or name. Workflow name will be deprecated since Mitaka. 
workflow_input Workflow input params Workflow additional parameters mistral-6.0.0/doc/source/admin/0000775000175100017510000000000013245513604016354 5ustar zuulzuul00000000000000mistral-6.0.0/doc/source/admin/upgrade_guide.rst0000666000175100017510000000432713245513261021721 0ustar zuulzuul00000000000000Mistral Upgrade Guide ===================== Database upgrade ---------------- The migrations in ``alembic_migrations/versions`` contain the changes needed to migrate between Mistral database revisions. A migration occurs by executing a script that details the changes needed to upgrade the database. The migration scripts are ordered so that multiple scripts can run sequentially. The scripts are executed by Mistral's migration wrapper which uses the Alembic library to manage the migration. Mistral supports migration from Kilo or later. You can upgrade to the latest database version via: :: $ mistral-db-manage --config-file /path/to/mistral.conf upgrade head You can populate the database with standard actions and workflows: :: $ mistral-db-manage --config-file /path/to/mistral.conf populate To check the current database version: :: $ mistral-db-manage --config-file /path/to/mistral.conf current To create a script to run the migration offline: :: $ mistral-db-manage --config-file /path/to/mistral.conf upgrade head --sql To run the offline migration between specific migration versions: :: $ mistral-db-manage --config-file /path/to/mistral.conf upgrade : --sql Upgrade the database incrementally: :: $ mistral-db-manage --config-file /path/to/mistral.conf upgrade --delta <# of revs> Or, upgrade the database to one newer revision: :: $ mistral-db-manage --config-file /path/to/mistral.conf upgrade +1 Create new revision: :: $ mistral-db-manage --config-file /path/to/mistral.conf revision -m "description of revision" --autogenerate Create a blank file: :: $ mistral-db-manage --config-file /path/to/mistral.conf revision -m "description of revision" This command does not perform 
any migrations, it only sets the revision. Revision may be any existing
revision. Use this command carefully.

::

    $ mistral-db-manage --config-file /path/to/mistral.conf stamp <revision>

To verify that the timeline does branch, you can run this command:

::

    $ mistral-db-manage --config-file /path/to/mistral.conf check_migration

If the migration path has a branch, you can find the branch point via:

::

    $ mistral-db-manage --config-file /path/to/mistral.conf history

mistral-6.0.0/doc/source/admin/index.rst0000666000175100017510000000013213245513261020212 0ustar zuulzuul00000000000000Mistral Admin Guide
=====================

.. toctree::
   :maxdepth: 1

   upgrade_guide

mistral-6.0.0/doc/source/_templates/0000775000175100017510000000000013245513604017421 5ustar zuulzuul00000000000000mistral-6.0.0/doc/source/_templates/sidebarlinks.html0000666000175100017510000000046413245513261022766 0ustar zuulzuul00000000000000


{% if READTHEDOCS %} {% endif %}mistral-6.0.0/playbooks/0000775000175100017510000000000013245513604015222 5ustar zuulzuul00000000000000mistral-6.0.0/playbooks/rally/0000775000175100017510000000000013245513604016345 5ustar zuulzuul00000000000000mistral-6.0.0/playbooks/rally/run.yaml0000666000175100017510000000045513245513262020043 0ustar zuulzuul00000000000000- hosts: all tasks: - name: Run Devstack include_role: name: run-devstack - name: Run rally shell: cmd: | ./tests/ci/rally-gate.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/src/{{ zuul.project.canonical_name }}/../rally' mistral-6.0.0/playbooks/legacy/0000775000175100017510000000000013245513604016466 5ustar zuulzuul00000000000000mistral-6.0.0/playbooks/legacy/mistral-ha/0000775000175100017510000000000013245513604020527 5ustar zuulzuul00000000000000mistral-6.0.0/playbooks/legacy/mistral-ha/run.yaml0000666000175100017510000000411113245513262022216 0ustar zuulzuul00000000000000- hosts: all name: Autoconverted job legacy-mistral-ha from old job gate-mistral-ha-ubuntu-xenial-nv tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x CLONEMAP=`mktemp` function cleanup { # In cases where zuul-cloner is aborted during a git # clone operation, git will remove the git work tree in # its cleanup. The work tree in these jobs is the # workspace directory, which means that subsequent # jenkins post-build actions can not run because the # workspace has been removed. # To reduce the likelihood of this having an impact, # recreate the workspace directory if needed mkdir -p $WORKSPACE rm -f $CLONEMAP } trap cleanup EXIT cat > $CLONEMAP << EOF clonemap: - name: $ZUUL_PROJECT dest: . 
EOF /usr/zuul-env/bin/zuul-cloner -m $CLONEMAP --cache-dir /opt/git \ git://git.openstack.org $ZUUL_PROJECT executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: /usr/local/jenkins/slave_scripts/install-distro-packages.sh chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | # TODO: this is a temporary solution that puts all installation # code into a script residing in mistral repo just for more # convenient debugging (since we will be able to send patchsets to # mistral with "check experimental" and trigger the gate). After # it's ready it'll be better to create a special builder in this # file. ha_gate/install.sh ha_gate/run_tests.sh chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' mistral-6.0.0/playbooks/docker-buildimage/0000775000175100017510000000000013245513604020571 5ustar zuulzuul00000000000000mistral-6.0.0/playbooks/docker-buildimage/post.yaml0000666000175100017510000000123313245513262022443 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Ensure artifacts directory exists file: path: '{{ zuul.executor.work_root }}/artifacts' state: directory delegate_to: localhost - name: Copy files from {{ ansible_user_dir }}/src/{{ zuul.project.canonical_name }} on node synchronize: src: '{{ ansible_user_dir }}/src/{{ zuul.project.canonical_name }}/' dest: '{{ zuul.executor.work_root }}/artifacts/images' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/mistral-docker.tar.gz - --include=*/ - --exclude=* - --prune-empty-dirs mistral-6.0.0/playbooks/docker-buildimage/run.yaml0000666000175100017510000000024213245513262022261 0ustar zuulzuul00000000000000- hosts: all tasks: - shell: cmd: | ./docker_image_build.sh chdir: '{{ ansible_user_dir }}/src/{{ zuul.project.canonical_name }}' mistral-6.0.0/devstack/0000775000175100017510000000000013245513604015023 5ustar 
zuulzuul00000000000000mistral-6.0.0/devstack/README.rst0000666000175100017510000000112513245513261016512 0ustar zuulzuul00000000000000============================ Enabling Mistral in Devstack ============================ 1. Download DevStack:: git clone https://github.com/openstack-dev/devstack.git cd devstack 2. Add this repo as an external repository in ``local.conf`` file:: > cat local.conf [[local|localrc]] enable_plugin mistral https://github.com/openstack/mistral To use stable branches, make sure devstack is on that branch, and specify the branch name to enable_plugin, for example:: enable_plugin mistral https://github.com/openstack/mistral stable/pike 3. run ``stack.sh`` mistral-6.0.0/devstack/settings0000666000175100017510000000323513245513261016612 0ustar zuulzuul00000000000000# Devstack settings # We have to add Mistral to enabled services for run_process to work # "mistral" should be always enabled # To run services in separate processes and screens need to write: # enable_service mistral mistral-api mistral-engine mistral-executor # To run all services in one screen as a one process need to write: # enable_service mistral # All other combinations of services like 'mistral mistral-api' or 'mistral mistral-api mistral-engine' # is an incorrect way to run services and all services by default will run in one screen enable_service mistral mistral-api mistral-engine mistral-executor mistral-event-engine # Set up default repos MISTRAL_REPO=${MISTRAL_REPO:-${GIT_BASE}/openstack/mistral.git} MISTRAL_BRANCH=${MISTRAL_BRANCH:-master} MISTRAL_DASHBOARD_REPO=${MISTRAL_DASHBOARD_REPO:-${GIT_BASE}/openstack/mistral-dashboard.git} MISTRAL_DASHBOARD_BRANCH=${MISTRAL_DASHBOARD_BRANCH:-master} MISTRAL_PYTHONCLIENT_REPO=${MISTRAL_PYTHONCLIENT_REPO:-${GIT_BASE}/openstack/python-mistralclient.git} MISTRAL_PYTHONCLIENT_BRANCH=${MISTRAL_PYTHONCLIENT_BRANCH:-master} MISTRAL_PYTHONCLIENT_DIR=$DEST/python-mistralclient # Set up default directories MISTRAL_DIR=$DEST/mistral 
MISTRAL_DASHBOARD_DIR=$DEST/mistral-dashboard MISTRAL_CONF_DIR=${MISTRAL_CONF_DIR:-/etc/mistral} MISTRAL_CONF_FILE=${MISTRAL_CONF_DIR}/mistral.conf MISTRAL_DEBUG=${MISTRAL_DEBUG:-True} MISTRAL_AUTH_CACHE_DIR=${MISTRAL_AUTH_CACHE_DIR:-/var/cache/mistral} MISTRAL_SERVICE_HOST=${MISTRAL_SERVICE_HOST:-$SERVICE_HOST} MISTRAL_SERVICE_PORT=${MISTRAL_SERVICE_PORT:-8989} MISTRAL_SERVICE_PROTOCOL=${MISTRAL_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL} MISTRAL_ADMIN_USER=${MISTRAL_ADMIN_USER:-mistral} mistral-6.0.0/devstack/plugin.sh0000777000175100017510000002031413245513261016661 0ustar zuulzuul00000000000000# ``stack.sh`` calls the entry points in this order: # # install_mistral # install_python_mistralclient # configure_mistral # start_mistral # stop_mistral # cleanup_mistral # Save trace setting XTRACE=$(set +o | grep xtrace) set -o xtrace # Defaults # -------- # Support entry points installation of console scripts if [[ -d $MISTRAL_DIR/bin ]]; then MISTRAL_BIN_DIR=$MISTRAL_DIR/bin else MISTRAL_BIN_DIR=$(get_python_exec_prefix) fi # Toggle for deploying Mistral API under HTTPD + mod_wsgi MISTRAL_USE_MOD_WSGI=${MISTRAL_USE_MOD_WSGI:-True} MISTRAL_FILES_DIR=$MISTRAL_DIR/devstack/files # create_mistral_accounts - Set up common required mistral accounts # # Tenant User Roles # ------------------------------ # service mistral admin function create_mistral_accounts { if ! is_service_enabled key; then return fi create_service_user "mistral" "admin" get_or_create_service "mistral" "workflowv2" "Workflow Service v2" get_or_create_endpoint "workflowv2" \ "$REGION_NAME" \ "$MISTRAL_SERVICE_PROTOCOL://$MISTRAL_SERVICE_HOST:$MISTRAL_SERVICE_PORT/v2" \ "$MISTRAL_SERVICE_PROTOCOL://$MISTRAL_SERVICE_HOST:$MISTRAL_SERVICE_PORT/v2" \ "$MISTRAL_SERVICE_PROTOCOL://$MISTRAL_SERVICE_HOST:$MISTRAL_SERVICE_PORT/v2" } function mkdir_chown_stack { if [[ ! 
-d "$1" ]]; then sudo mkdir -p "$1" fi sudo chown $STACK_USER "$1" } # Entry points # ------------ # configure_mistral - Set config files, create data dirs, etc function configure_mistral { # create and clean up auth cache dir mkdir_chown_stack "$MISTRAL_AUTH_CACHE_DIR" rm -f "$MISTRAL_AUTH_CACHE_DIR"/* mkdir_chown_stack "$MISTRAL_CONF_DIR" # Generate Mistral configuration file and configure common parameters. oslo-config-generator --config-file $MISTRAL_DIR/tools/config/config-generator.mistral.conf --output-file $MISTRAL_CONF_FILE iniset $MISTRAL_CONF_FILE DEFAULT debug $MISTRAL_DEBUG # Run all Mistral processes as a single process iniset $MISTRAL_CONF_FILE DEFAULT server all # Mistral Configuration #------------------------- # Setup keystone_authtoken section configure_auth_token_middleware $MISTRAL_CONF_FILE mistral $MISTRAL_AUTH_CACHE_DIR iniset $MISTRAL_CONF_FILE keystone_authtoken auth_uri $KEYSTONE_AUTH_URI_V3 # Setup RabbitMQ credentials iniset_rpc_backend mistral $MISTRAL_CONF_FILE # Configure the database. iniset $MISTRAL_CONF_FILE database connection `database_connection_url mistral` iniset $MISTRAL_CONF_FILE database max_overflow -1 iniset $MISTRAL_CONF_FILE database max_pool_size 1000 # Configure action execution deletion policy iniset $MISTRAL_CONF_FILE api allow_action_execution_deletion True if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ]; then setup_colorized_logging $MISTRAL_CONF_FILE DEFAULT tenant user fi if [ "$MISTRAL_RPC_IMPLEMENTATION" ]; then iniset $MISTRAL_CONF_FILE DEFAULT rpc_implementation $MISTRAL_RPC_IMPLEMENTATION fi if [ "$MISTRAL_USE_MOD_WSGI" == "True" ]; then _config_mistral_apache_wsgi fi } # init_mistral - Initialize the database function init_mistral { # (re)create Mistral database recreate_database mistral utf8 $PYTHON $MISTRAL_DIR/tools/sync_db.py --config-file $MISTRAL_CONF_FILE } # install_mistral - Collect source and prepare function install_mistral { setup_develop $MISTRAL_DIR # installing python-nose. 
real_install_package python-nose if is_service_enabled horizon; then _install_mistraldashboard fi if [ "$MISTRAL_USE_MOD_WSGI" == "True" ]; then install_apache_wsgi fi } function _install_mistraldashboard { git_clone $MISTRAL_DASHBOARD_REPO $MISTRAL_DASHBOARD_DIR $MISTRAL_DASHBOARD_BRANCH setup_develop $MISTRAL_DASHBOARD_DIR ln -fs $MISTRAL_DASHBOARD_DIR/mistraldashboard/enabled/_50_mistral.py $HORIZON_DIR/openstack_dashboard/local/enabled/_50_mistral.py } function install_mistral_pythonclient { if use_library_from_git "python-mistralclient"; then git_clone $MISTRAL_PYTHONCLIENT_REPO $MISTRAL_PYTHONCLIENT_DIR $MISTRAL_PYTHONCLIENT_BRANCH local tags=`git --git-dir=$MISTRAL_PYTHONCLIENT_DIR/.git tag -l | grep 2015` if [ ! "$tags" = "" ]; then git --git-dir=$MISTRAL_PYTHONCLIENT_DIR/.git tag -d $tags fi setup_develop $MISTRAL_PYTHONCLIENT_DIR fi } # start_mistral - Start running processes function start_mistral { # If the site is not enabled then we are in a grenade scenario local enabled_site_file enabled_site_file=$(apache_site_config_for mistral-api) if is_service_enabled mistral-api && is_service_enabled mistral-engine && is_service_enabled mistral-executor && is_service_enabled mistral-event-engine ; then echo_summary "Installing all mistral services in separate processes" if [ -f ${enabled_site_file} ] && [ "$MISTRAL_USE_MOD_WSGI" == "True" ]; then enable_apache_site mistral-api restart_apache_server else run_process mistral-api "$MISTRAL_BIN_DIR/mistral-server --server api --config-file $MISTRAL_CONF_DIR/mistral.conf" fi run_process mistral-engine "$MISTRAL_BIN_DIR/mistral-server --server engine --config-file $MISTRAL_CONF_DIR/mistral.conf" run_process mistral-executor "$MISTRAL_BIN_DIR/mistral-server --server executor --config-file $MISTRAL_CONF_DIR/mistral.conf" run_process mistral-event-engine "$MISTRAL_BIN_DIR/mistral-server --server event-engine --config-file $MISTRAL_CONF_DIR/mistral.conf" else echo_summary "Installing all mistral services in one process" 
run_process mistral "$MISTRAL_BIN_DIR/mistral-server --server all --config-file $MISTRAL_CONF_DIR/mistral.conf" fi } # stop_mistral - Stop running processes function stop_mistral { local serv for serv in mistral mistral-engine mistral-executor mistral-event-engine; do stop_process $serv done if [ "$MISTRAL_USE_MOD_WSGI" == "True" ]; then disable_apache_site mistral-api restart_apache_server else stop_process mistral-api fi } function cleanup_mistral { if is_service_enabled horizon; then _mistral_cleanup_mistraldashboard fi if [ "$MISTRAL_USE_MOD_WSGI" == "True" ]; then _mistral_cleanup_apache_wsgi fi sudo rm -rf $MISTRAL_CONF_DIR } function _mistral_cleanup_mistraldashboard { rm -f $HORIZON_DIR/openstack_dashboard/local/enabled/_50_mistral.py } function _mistral_cleanup_apache_wsgi { sudo rm -f $(apache_site_config_for mistral-api) } # _config_mistral_apache_wsgi() - Set WSGI config files for Mistral function _config_mistral_apache_wsgi { local mistral_apache_conf mistral_apache_conf=$(apache_site_config_for mistral-api) local mistral_ssl="" local mistral_certfile="" local mistral_keyfile="" local mistral_api_port=$MISTRAL_SERVICE_PORT local venv_path="" sudo cp $MISTRAL_FILES_DIR/apache-mistral-api.template $mistral_apache_conf sudo sed -e " s|%PUBLICPORT%|$mistral_api_port|g; s|%APACHE_NAME%|$APACHE_NAME|g; s|%MISTRAL_BIN_DIR%|$MISTRAL_BIN_DIR|g; s|%API_WORKERS%|$API_WORKERS|g; s|%SSLENGINE%|$mistral_ssl|g; s|%SSLCERTFILE%|$mistral_certfile|g; s|%SSLKEYFILE%|$mistral_keyfile|g; s|%USER%|$STACK_USER|g; s|%VIRTUALENV%|$venv_path|g " -i $mistral_apache_conf } if is_service_enabled mistral; then if [[ "$1" == "stack" && "$2" == "install" ]]; then echo_summary "Installing mistral" install_mistral install_mistral_pythonclient elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then echo_summary "Configuring mistral" create_mistral_accounts configure_mistral elif [[ "$1" == "stack" && "$2" == "extra" ]]; then echo_summary "Initializing mistral" init_mistral 
        start_mistral
    fi

    if [[ "$1" == "unstack" ]]; then
        echo_summary "Shutting down mistral"
        stop_mistral
    fi

    if [[ "$1" == "clean" ]]; then
        echo_summary "Cleaning mistral"
        cleanup_mistral
    fi
fi

# Restore xtrace
$XTRACE

# Local variables:
# mode: shell-script
# End:
mistral-6.0.0/devstack/files/0000775000175100017510000000000013245513604016125 5ustar zuulzuul00000000000000mistral-6.0.0/devstack/files/apache-mistral-api.template0000666000175100017510000000151113245513261023322 0ustar zuulzuul00000000000000
Listen %PUBLICPORT%

<VirtualHost *:%PUBLICPORT%>
    WSGIDaemonProcess mistral-api processes=%API_WORKERS% threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
    WSGIProcessGroup mistral-api
    WSGIScriptAlias / %MISTRAL_BIN_DIR%/mistral-wsgi-api
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    AllowEncodedSlashes On
    <IfVersion >= 2.4>
        ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/%APACHE_NAME%/mistral_api.log
    CustomLog /var/log/%APACHE_NAME%/mistral_api_access.log combined
    %SSLENGINE%
    %SSLCERTFILE%
    %SSLKEYFILE%

    <Directory %MISTRAL_BIN_DIR%>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>
mistral-6.0.0/run_tests.sh0000777000175100017510000002240113245513262015605 0ustar zuulzuul00000000000000
#!/bin/bash

set -eu

function usage {
    echo "Usage: $0 [OPTION]..."
    echo "Run Mistral's test suite(s)"
    echo ""
    echo "  -V, --virtual-env        Always use virtualenv. Install automatically if not present"
    echo "  -N, --no-virtual-env     Don't use virtualenv. Run tests in local environment"
    echo "  -s, --no-site-packages   Isolate the virtualenv from the global Python environment"
    echo "  -r, --recreate-db        Recreate the test database (deprecated, as this is now the default)."
    echo "  -n, --no-recreate-db     Don't recreate the test database."
    echo "  -f, --force              Force a clean re-build of the virtual environment. Useful when dependencies have been added."
echo " -u, --update Update the virtual environment with any newer package versions" echo " -p, --pep8 Just run PEP8 and HACKING compliance check" echo " -P, --no-pep8 Don't run static code checks" echo " -c, --coverage Generate coverage report" echo " -d, --debug Run tests with testtools instead of testr. This allows you to use the debugger." echo " -h, --help Print this usage message" echo " --virtual-env-path Location of the virtualenv directory" echo " Default: \$(pwd)" echo " --virtual-env-name Name of the virtualenv directory" echo " Default: .venv" echo " --tools-path Location of the tools directory" echo " Default: \$(pwd)" echo " --db-type Database type" echo " Default: sqlite" echo " --parallel Determines whether the tests are run in one thread or not" echo " Default: false" echo "" echo "Note: with no options specified, the script will try to run the tests in a virtual environment," echo " If no virtualenv is found, the script will ask if you would like to create one. If you " echo " prefer to run tests NOT in a virtual environment, simply pass the -N option." 
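run_tests.sh parses the options listed above with bash indirect expansion: inside `process_options`, `${!i}` expands to the value of the positional parameter whose number is stored in `i`, which lets one loop handle both flags and two-word options such as `--db-type sqlite`. A self-contained sketch of the idiom; `demo_parse` and `picked_db` are illustrative names, not part of run_tests.sh:

```shell
# ${!i} is bash indirect expansion: it yields positional parameter number $i.
# demo_parse and picked_db are hypothetical names used only for this sketch.
demo_parse() {
    i=1
    while [ $i -le $# ]; do
        case "${!i}" in
            --db-type)
                (( i++ ))          # step onto the option's value
                picked_db=${!i}
                ;;
        esac
        (( i++ ))                  # advance to the next parameter
    done
}

demo_parse --some-flag --db-type postgresql
echo "$picked_db"   # prints: postgresql
```

Two-word options must therefore advance `i` twice per iteration: once to consume the value and once more at the bottom of the loop.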
    exit
}

function process_options {
    i=1
    while [ $i -le $# ]; do
        case "${!i}" in
            -h|--help) usage;;
            -V|--virtual-env) always_venv=1; never_venv=0;;
            -N|--no-virtual-env) always_venv=0; never_venv=1;;
            -s|--no-site-packages) no_site_packages=1;;
            -r|--recreate-db) recreate_db=1;;
            -n|--no-recreate-db) recreate_db=0;;
            -f|--force) force=1;;
            -u|--update) update=1;;
            -p|--pep8) just_pep8=1;;
            -P|--no-pep8) no_pep8=1;;
            -c|--coverage) coverage=1;;
            -d|--debug) debug=1;;
            --virtual-env-path)
                (( i++ ))
                venv_path=${!i}
                ;;
            --virtual-env-name)
                (( i++ ))
                venv_dir=${!i}
                ;;
            --tools-path)
                (( i++ ))
                tools_path=${!i}
                ;;
            --db-type)
                (( i++ ))
                db_type=${!i}
                ;;
            --parallel)
                (( i++ ))
                parallel=${!i}
                ;;
            -*) testropts="$testropts ${!i}";;
            *) testrargs="$testrargs ${!i}"
        esac
        (( i++ ))
    done
}

db_type=${db_type:-sqlite}
parallel=${parallel:-false}
tools_path=${tools_path:-$(pwd)}
venv_path=${venv_path:-$(pwd)}
venv_dir=${venv_dir:-.venv}
with_venv=tools/with_venv.sh
always_venv=0
never_venv=0
force=0
no_site_packages=0
installvenvopts=
testrargs=
testropts=
wrapper=""
just_pep8=0
no_pep8=0
coverage=0
debug=0
recreate_db=1
update=0

LANG=en_US.UTF-8
LANGUAGE=en_US:en
LC_ALL=C
CI_PROJECT=${CI_PROJECT:-""}

process_options "$@"

# Make our paths available to other scripts we call
export venv_path
export venv_dir
export venv_name
export tools_dir
export venv=${venv_path}/${venv_dir}

if [ $no_site_packages -eq 1 ]; then
    installvenvopts="--no-site-packages"
fi

function setup_db {
    case ${db_type} in
        sqlite )
            rm -f tests.sqlite
            ;;
        postgresql )
            echo "Setting up Mistral DB in PostgreSQL"
            # If CI_PROJECT is specified it means that this script is executing on
            # the Jenkins gate, so we should use the already created postgresql db
            if [ -n "$CI_PROJECT" ]
            then
                echo "PostgreSQL is initialized. 'openstack_citest' db will be used."
                dbname="openstack_citest"
                username="openstack_citest"
                password="openstack_citest"
            else
                # Create the user and database.
                # Assume trust is setup on localhost in the postgresql config file.
dbname="mistral" username="mistral" password="m1stral" pg_command "SELECT pg_terminate_backend(pg_stat_activity.pid) FROM pg_stat_activity WHERE pg_stat_activity.datname = '$dbname' AND pid <> pg_backend_pid();" pg_command "DROP DATABASE IF EXISTS $dbname;" pg_command "DROP USER IF EXISTS $username;" pg_command "CREATE USER $username WITH ENCRYPTED PASSWORD '$password';" pg_command "CREATE DATABASE $dbname OWNER $username;" fi ;; esac } function pg_command { command=$1 sudo -u postgres psql -h localhost -c "${command}" } function setup_db_pylib { case ${db_type} in postgresql ) echo "Installing python library for PostgreSQL." ${wrapper} pip install psycopg2 ;; esac } function setup_db_cfg { case ${db_type} in sqlite ) rm -f .mistral.conf ;; postgresql ) oslo-config-generator --config-file ./tools/config/config-generator.mistral.conf --output-file .mistral.conf sed -i "s/#connection = /connection = postgresql:\/\/$username:$password@localhost\/$dbname/g" .mistral.conf ;; esac } function cleanup { rm -f .mistral.conf } function run_tests { # Cleanup *pyc ${wrapper} find . -type f -name "*.pyc" -delete if [ $debug -eq 1 ]; then if [ "$testropts" = "" ] && [ "$testrargs" = "" ]; then # Default to running all tests if specific test is not # provided. testrargs="discover ./mistral/tests/unit" fi ${wrapper} python -m testtools.run $testropts $testrargs # Short circuit because all of the testr and coverage stuff # below does not make sense when running testtools.run for # debugging purposes. return $? fi if [ $coverage -eq 1 ]; then TESTRTESTS="$TESTRTESTS --coverage" else TESTRTESTS="$TESTRTESTS --slowest" fi # Just run the test suites in current environment set +e testrargs=$(echo "$testrargs" | sed -e's/^\s*\(.*\)\s*$/\1/') if [ $parallel = true ] then runoptions="--subunit" else runoptions="--concurrency=1 --subunit" fi TESTRTESTS="$TESTRTESTS --testr-args='$runoptions $testropts $testrargs'" OS_TEST_PATH=$(echo $testrargs|grep -o 'mistral\.tests[^[:space:]:]*\+'|tr . 
/) if [ -d "$OS_TEST_PATH" ]; then wrapper="OS_TEST_PATH=$OS_TEST_PATH $wrapper" elif [ -d "$(dirname $OS_TEST_PATH)" ]; then wrapper="OS_TEST_PATH=$(dirname $OS_TEST_PATH) $wrapper" fi echo "Running ${wrapper} $TESTRTESTS" bash -c "${wrapper} $TESTRTESTS | ${wrapper} subunit2pyunit" RESULT=$? set -e copy_subunit_log cleanup if [ $coverage -eq 1 ]; then echo "Generating coverage report in covhtml/" # Don't compute coverage for common code, which is tested elsewhere ${wrapper} coverage combine ${wrapper} coverage html --include='mistral/*' -d covhtml -i fi return $RESULT } function copy_subunit_log { LOGNAME=$(cat .testrepository/next-stream) LOGNAME=$(($LOGNAME - 1)) LOGNAME=".testrepository/${LOGNAME}" cp $LOGNAME subunit.log } function run_pep8 { echo "Running flake8 ..." ${wrapper} flake8 } TESTRTESTS="python setup.py testr" if [ $never_venv -eq 0 ] then # Remove the virtual environment if --force used if [ $force -eq 1 ]; then echo "Cleaning virtualenv..." rm -rf ${venv} fi if [ $update -eq 1 ]; then echo "Updating virtualenv..." python tools/install_venv.py $installvenvopts fi if [ -e ${venv} ]; then wrapper="${with_venv}" else if [ $always_venv -eq 1 ]; then # Automatically install the virtualenv python tools/install_venv.py $installvenvopts wrapper="${with_venv}" else echo -e "No virtual environment found...create one? (Y/n) \c" read use_ve if [ "x$use_ve" = "xY" -o "x$use_ve" = "x" -o "x$use_ve" = "xy" ]; then # Install the virtualenv and run the test suite in it python tools/install_venv.py $installvenvopts wrapper=${with_venv} fi fi fi fi # Delete old coverage data from previous runs if [ $coverage -eq 1 ]; then ${wrapper} coverage erase fi if [ $just_pep8 -eq 1 ]; then run_pep8 exit fi if [ $recreate_db -eq 1 ]; then setup_db fi setup_db_pylib setup_db_cfg run_tests # NOTE(sirp): we only want to run pep8 when we're running the full-test suite, # not when we're running tests individually. 
To handle this, we need to # distinguish between options (testropts), which begin with a '-', and # arguments (testrargs). if [ -z "$testrargs" ]; then if [ $no_pep8 -eq 0 ]; then run_pep8 fi fi mistral-6.0.0/LICENSE0000666000175100017510000002363613245513261014237 0ustar zuulzuul00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). 
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. 
mistral-6.0.0/HACKING.rst0000666000175100017510000000110613245513261015014 0ustar zuulzuul00000000000000Style Commandments ================== Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/ Mistral Specific Commandments ----------------------------- - [M001] Use LOG.warning(). LOG.warn() is deprecated. - [M319] Enforce use of assertTrue/assertFalse - [M320] Enforce use of assertIs/assertIsNot - [M327] Do not use xrange(). xrange() is not compatible with Python 3. Use range() or six.moves.range() instead. - [M328] Python 3: do not use dict.iteritems. - [M329] Python 3: do not use dict.iterkeys. - [M330] Python 3: do not use dict.itervalues. mistral-6.0.0/mistral.egg-info/0000775000175100017510000000000013245513604016364 5ustar zuulzuul00000000000000mistral-6.0.0/mistral.egg-info/SOURCES.txt0000664000175100017510000005371213245513604020260 0ustar zuulzuul00000000000000.coveragerc .testr.conf .zuul.yaml AUTHORS CONTRIBUTING.rst ChangeLog HACKING.rst LICENSE README.rst docker_image_build.sh requirements.txt run_functional_tests.sh run_tests.sh setup.cfg setup.py test-requirements.txt tox.ini api-ref/source/conf.py api-ref/source/index.rst api-ref/source/v2/action.inc api-ref/source/v2/cron-trigger.inc api-ref/source/v2/execution.inc api-ref/source/v2/task.inc api-ref/source/v2/workbook.inc api-ref/source/v2/workflow.inc devstack/README.rst devstack/plugin.sh devstack/settings devstack/files/apache-mistral-api.template doc/source/architecture.rst doc/source/conf.py doc/source/cookbooks.rst doc/source/index.rst doc/source/main_features.rst doc/source/overview.rst doc/source/quickstart.rst doc/source/_templates/sidebarlinks.html doc/source/_theme/layout.html doc/source/_theme/theme.conf doc/source/admin/index.rst doc/source/admin/upgrade_guide.rst doc/source/api/index.rst doc/source/api/v2.rst doc/source/cli/index.rst doc/source/configuration/config-guide.rst doc/source/configuration/index.rst doc/source/configuration/policy-guide.rst 
doc/source/configuration/samples/index.rst doc/source/configuration/samples/policy-yaml.rst doc/source/contributor/asynchronous_actions.rst doc/source/contributor/creating_custom_action.rst doc/source/contributor/debug.rst doc/source/contributor/devstack.rst doc/source/contributor/extending_yaql.rst doc/source/contributor/index.rst doc/source/contributor/troubleshooting.rst doc/source/img/Mistral_actions.png doc/source/img/Mistral_cron_trigger.png doc/source/img/Mistral_dashboard_debug_config.png doc/source/img/Mistral_dashboard_django_settings.png doc/source/img/Mistral_dashboard_environment_variables.png doc/source/img/Mistral_direct_workflow.png doc/source/img/Mistral_reverse_workflow.png doc/source/img/Mistral_workbook_namespacing.png doc/source/img/Pycharm_run_config_menu.png doc/source/img/mistral_architecture.png doc/source/install/dashboard_guide.rst doc/source/install/index.rst doc/source/install/installation_guide.rst doc/source/install/mistralclient_guide.rst doc/source/terminology/actions.rst doc/source/terminology/cron_triggers.rst doc/source/terminology/executions.rst doc/source/terminology/index.rst doc/source/terminology/workbooks.rst doc/source/terminology/workflows.rst doc/source/user/wf_lang_v2.rst etc/README.mistral.conf etc/event_definitions.yml.sample etc/logging.conf.sample etc/logging.conf.sample.rotating etc/policy.json etc/wf_trace_logging.conf.sample etc/wf_trace_logging.conf.sample.rotating functionaltests/post_test_hook.sh functionaltests/run_tests.sh mistral/__init__.py mistral/_i18n.py mistral/config.py mistral/context.py mistral/exceptions.py mistral/messaging.py mistral/serialization.py mistral/version.py mistral.egg-info/PKG-INFO mistral.egg-info/SOURCES.txt mistral.egg-info/dependency_links.txt mistral.egg-info/entry_points.txt mistral.egg-info/not-zip-safe mistral.egg-info/pbr.json mistral.egg-info/requires.txt mistral.egg-info/top_level.txt mistral/actions/__init__.py mistral/actions/action_factory.py 
mistral/actions/action_generator.py mistral/actions/base.py mistral/actions/generator_factory.py mistral/actions/std_actions.py mistral/actions/openstack/__init__.py mistral/actions/openstack/actions.py mistral/actions/openstack/base.py mistral/actions/openstack/mapping.json mistral/actions/openstack/action_generator/__init__.py mistral/actions/openstack/action_generator/base.py mistral/api/__init__.py mistral/api/access_control.py mistral/api/app.py mistral/api/service.py mistral/api/wsgi.py mistral/api/controllers/__init__.py mistral/api/controllers/resource.py mistral/api/controllers/root.py mistral/api/controllers/v2/__init__.py mistral/api/controllers/v2/action.py mistral/api/controllers/v2/action_execution.py mistral/api/controllers/v2/cron_trigger.py mistral/api/controllers/v2/environment.py mistral/api/controllers/v2/event_trigger.py mistral/api/controllers/v2/execution.py mistral/api/controllers/v2/member.py mistral/api/controllers/v2/resources.py mistral/api/controllers/v2/root.py mistral/api/controllers/v2/service.py mistral/api/controllers/v2/task.py mistral/api/controllers/v2/types.py mistral/api/controllers/v2/validation.py mistral/api/controllers/v2/workbook.py mistral/api/controllers/v2/workflow.py mistral/api/hooks/__init__.py mistral/api/hooks/content_type.py mistral/auth/__init__.py mistral/auth/keycloak.py mistral/auth/keystone.py mistral/cmd/__init__.py mistral/cmd/launch.py mistral/db/__init__.py mistral/db/utils.py mistral/db/sqlalchemy/__init__.py mistral/db/sqlalchemy/base.py mistral/db/sqlalchemy/model_base.py mistral/db/sqlalchemy/sqlite_lock.py mistral/db/sqlalchemy/types.py mistral/db/sqlalchemy/migration/__init__.py mistral/db/sqlalchemy/migration/alembic.ini mistral/db/sqlalchemy/migration/cli.py mistral/db/sqlalchemy/migration/alembic_migrations/README.md mistral/db/sqlalchemy/migration/alembic_migrations/__init__.py mistral/db/sqlalchemy/migration/alembic_migrations/env.py 
mistral/db/sqlalchemy/migration/alembic_migrations/script.py.mako mistral/db/sqlalchemy/migration/alembic_migrations/versions/001_kilo.py mistral/db/sqlalchemy/migration/alembic_migrations/versions/002_kilo.py mistral/db/sqlalchemy/migration/alembic_migrations/versions/003_cron_trigger_constraints.py mistral/db/sqlalchemy/migration/alembic_migrations/versions/004_add_description_for_execution.py mistral/db/sqlalchemy/migration/alembic_migrations/versions/005_increase_execution_columns_size.py mistral/db/sqlalchemy/migration/alembic_migrations/versions/006_add_processed_to_delayed_calls_v2.py mistral/db/sqlalchemy/migration/alembic_migrations/versions/007_move_system_flag_to_base_definition.py mistral/db/sqlalchemy/migration/alembic_migrations/versions/008_increase_size_of_state_info_column.py mistral/db/sqlalchemy/migration/alembic_migrations/versions/009_add_database_indices.py mistral/db/sqlalchemy/migration/alembic_migrations/versions/010_add_resource_members_v2_table.py mistral/db/sqlalchemy/migration/alembic_migrations/versions/011_add_workflow_id_for_execution.py mistral/db/sqlalchemy/migration/alembic_migrations/versions/012_add_event_triggers_v2_table.py mistral/db/sqlalchemy/migration/alembic_migrations/versions/013_split_execution_table_increase_names.py mistral/db/sqlalchemy/migration/alembic_migrations/versions/014_fix_past_scripts_discrepancies.py mistral/db/sqlalchemy/migration/alembic_migrations/versions/015_add_unique_keys_for_non_locking_model.py mistral/db/sqlalchemy/migration/alembic_migrations/versions/016_increase_size_of_task_unique_key.py mistral/db/sqlalchemy/migration/alembic_migrations/versions/017_add_named_lock_table.py mistral/db/sqlalchemy/migration/alembic_migrations/versions/018_increate_task_execution_unique_key_size.py mistral/db/sqlalchemy/migration/alembic_migrations/versions/019_change_scheduler_schema.py mistral/db/sqlalchemy/migration/alembic_migrations/versions/020_add_type_to_task_execution.py 
mistral/db/sqlalchemy/migration/alembic_migrations/versions/021_increase_env_columns_size.py mistral/db/sqlalchemy/migration/alembic_migrations/versions/022_namespace_support.py mistral/db/sqlalchemy/migration/alembic_migrations/versions/023_add_root_execution_id.py mistral/db/sqlalchemy/migration/alembic_migrations/versions/024_add_composite_index_workflow_execution_id_name.py mistral/db/sqlalchemy/migration/alembic_migrations/versions/__init__.py mistral/db/v2/__init__.py mistral/db/v2/api.py mistral/db/v2/sqlalchemy/__init__.py mistral/db/v2/sqlalchemy/api.py mistral/db/v2/sqlalchemy/filters.py mistral/db/v2/sqlalchemy/models.py mistral/engine/__init__.py mistral/engine/action_handler.py mistral/engine/action_queue.py mistral/engine/actions.py mistral/engine/base.py mistral/engine/default_engine.py mistral/engine/dispatcher.py mistral/engine/engine_server.py mistral/engine/policies.py mistral/engine/task_handler.py mistral/engine/tasks.py mistral/engine/utils.py mistral/engine/workflow_handler.py mistral/engine/workflows.py mistral/event_engine/__init__.py mistral/event_engine/base.py mistral/event_engine/default_event_engine.py mistral/event_engine/event_engine_server.py mistral/executors/__init__.py mistral/executors/base.py mistral/executors/default_executor.py mistral/executors/executor_server.py mistral/executors/remote_executor.py mistral/expressions/__init__.py mistral/expressions/base_expression.py mistral/expressions/jinja_expression.py mistral/expressions/yaql_expression.py mistral/ext/__init__.py mistral/ext/pygmentplugin.py mistral/hacking/__init__.py mistral/hacking/checks.py mistral/lang/__init__.py mistral/lang/base.py mistral/lang/parser.py mistral/lang/types.py mistral/lang/v2/__init__.py mistral/lang/v2/actions.py mistral/lang/v2/base.py mistral/lang/v2/on_clause.py mistral/lang/v2/policies.py mistral/lang/v2/publish.py mistral/lang/v2/retry_policy.py mistral/lang/v2/task_defaults.py mistral/lang/v2/tasks.py mistral/lang/v2/workbook.py 
mistral/lang/v2/workflows.py mistral/policies/__init__.py mistral/policies/action.py mistral/policies/action_executions.py mistral/policies/base.py mistral/policies/cron_trigger.py mistral/policies/environment.py mistral/policies/event_trigger.py mistral/policies/execution.py mistral/policies/member.py mistral/policies/service.py mistral/policies/task.py mistral/policies/workbook.py mistral/policies/workflow.py mistral/resources/actions/wait_ssh.yaml mistral/resources/workflows/create_instance.yaml mistral/resources/workflows/delete_instance.yaml mistral/rpc/__init__.py mistral/rpc/base.py mistral/rpc/clients.py mistral/rpc/kombu/__init__.py mistral/rpc/kombu/base.py mistral/rpc/kombu/kombu_client.py mistral/rpc/kombu/kombu_hosts.py mistral/rpc/kombu/kombu_listener.py mistral/rpc/kombu/kombu_server.py mistral/rpc/kombu/examples/__init__.py mistral/rpc/kombu/examples/client.py mistral/rpc/kombu/examples/server.py mistral/rpc/oslo/__init__.py mistral/rpc/oslo/oslo_client.py mistral/rpc/oslo/oslo_server.py mistral/service/__init__.py mistral/service/base.py mistral/service/coordination.py mistral/services/__init__.py mistral/services/action_manager.py mistral/services/actions.py mistral/services/expiration_policy.py mistral/services/periodic.py mistral/services/scheduler.py mistral/services/security.py mistral/services/triggers.py mistral/services/workbooks.py mistral/services/workflows.py mistral/tests/__init__.py mistral/tests/resources/action_jinja.yaml mistral/tests/resources/action_v2.yaml mistral/tests/resources/single_wf.yaml mistral/tests/resources/wb_v1.yaml mistral/tests/resources/wb_v2.yaml mistral/tests/resources/wb_with_nested_wf.yaml mistral/tests/resources/wf_action_ex_concurrency.yaml mistral/tests/resources/wf_jinja.yaml mistral/tests/resources/wf_task_ex_concurrency.yaml mistral/tests/resources/wf_v2.yaml mistral/tests/resources/for_wf_namespace/lowest_level_wf.yaml mistral/tests/resources/for_wf_namespace/middle_wf.yaml 
mistral/tests/resources/for_wf_namespace/top_level_wf.yaml mistral/tests/resources/openstack/action_collection_wb.yaml mistral/tests/resources/openstack/test_mapping.json mistral/tests/resources/workbook/v2/my_workbook.yaml mistral/tests/resources/workbook/v2/workbook_schema_test.yaml mistral/tests/unit/__init__.py mistral/tests/unit/base.py mistral/tests/unit/config.py mistral/tests/unit/test_command_dispatcher.py mistral/tests/unit/test_coordination.py mistral/tests/unit/test_exception_base.py mistral/tests/unit/test_expressions.py mistral/tests/unit/test_launcher.py mistral/tests/unit/test_serialization.py mistral/tests/unit/actions/__init__.py mistral/tests/unit/actions/test_action_manager.py mistral/tests/unit/actions/test_javascript_action.py mistral/tests/unit/actions/test_std_echo_action.py mistral/tests/unit/actions/test_std_email_action.py mistral/tests/unit/actions/test_std_fail_action.py mistral/tests/unit/actions/test_std_http_action.py mistral/tests/unit/actions/openstack/__init__.py mistral/tests/unit/actions/openstack/test_generator.py mistral/tests/unit/actions/openstack/test_openstack_actions.py mistral/tests/unit/api/__init__.py mistral/tests/unit/api/base.py mistral/tests/unit/api/test_access_control.py mistral/tests/unit/api/test_auth.py mistral/tests/unit/api/test_cors_middleware.py mistral/tests/unit/api/test_policies.py mistral/tests/unit/api/test_resource_base.py mistral/tests/unit/api/test_service.py mistral/tests/unit/api/v2/__init__.py mistral/tests/unit/api/v2/test_action_executions.py mistral/tests/unit/api/v2/test_actions.py mistral/tests/unit/api/v2/test_cron_triggers.py mistral/tests/unit/api/v2/test_environment.py mistral/tests/unit/api/v2/test_event_trigger.py mistral/tests/unit/api/v2/test_executions.py mistral/tests/unit/api/v2/test_keycloak_auth.py mistral/tests/unit/api/v2/test_members.py mistral/tests/unit/api/v2/test_root.py mistral/tests/unit/api/v2/test_services.py mistral/tests/unit/api/v2/test_tasks.py 
mistral/tests/unit/api/v2/test_workbooks.py mistral/tests/unit/api/v2/test_workflows.py mistral/tests/unit/db/__init__.py mistral/tests/unit/db/v2/__init__.py mistral/tests/unit/db/v2/test_db_model.py mistral/tests/unit/db/v2/test_locking.py mistral/tests/unit/db/v2/test_sqlalchemy_db_api.py mistral/tests/unit/db/v2/test_sqlite_transactions.py mistral/tests/unit/db/v2/test_transactions.py mistral/tests/unit/engine/__init__.py mistral/tests/unit/engine/base.py mistral/tests/unit/engine/test_action_context.py mistral/tests/unit/engine/test_action_defaults.py mistral/tests/unit/engine/test_adhoc_actions.py mistral/tests/unit/engine/test_commands.py mistral/tests/unit/engine/test_cron_trigger.py mistral/tests/unit/engine/test_dataflow.py mistral/tests/unit/engine/test_default_engine.py mistral/tests/unit/engine/test_direct_workflow.py mistral/tests/unit/engine/test_direct_workflow_rerun.py mistral/tests/unit/engine/test_direct_workflow_rerun_cancelled.py mistral/tests/unit/engine/test_direct_workflow_with_cycles.py mistral/tests/unit/engine/test_environment.py mistral/tests/unit/engine/test_error_handling.py mistral/tests/unit/engine/test_error_result.py mistral/tests/unit/engine/test_execution_fields_size_limitation.py mistral/tests/unit/engine/test_integrity_check.py mistral/tests/unit/engine/test_javascript_action.py mistral/tests/unit/engine/test_join.py mistral/tests/unit/engine/test_lookup_utils.py mistral/tests/unit/engine/test_noop_task.py mistral/tests/unit/engine/test_policies.py mistral/tests/unit/engine/test_profiler.py mistral/tests/unit/engine/test_race_condition.py mistral/tests/unit/engine/test_reverse_workflow.py mistral/tests/unit/engine/test_reverse_workflow_rerun.py mistral/tests/unit/engine/test_reverse_workflow_rerun_cancelled.py mistral/tests/unit/engine/test_run_action.py mistral/tests/unit/engine/test_safe_rerun.py mistral/tests/unit/engine/test_set_state.py mistral/tests/unit/engine/test_state_info.py 
mistral/tests/unit/engine/test_subworkflows.py mistral/tests/unit/engine/test_subworkflows_pause_resume.py mistral/tests/unit/engine/test_task_cancel.py mistral/tests/unit/engine/test_task_defaults.py mistral/tests/unit/engine/test_task_pause_resume.py mistral/tests/unit/engine/test_task_publish.py mistral/tests/unit/engine/test_tasks_function.py mistral/tests/unit/engine/test_with_items.py mistral/tests/unit/engine/test_with_items_task.py mistral/tests/unit/engine/test_workflow_cancel.py mistral/tests/unit/engine/test_workflow_resume.py mistral/tests/unit/engine/test_workflow_stop.py mistral/tests/unit/engine/test_workflow_variables.py mistral/tests/unit/engine/test_yaql_functions.py mistral/tests/unit/executors/__init__.py mistral/tests/unit/executors/base.py mistral/tests/unit/executors/test_local_executor.py mistral/tests/unit/executors/test_plugins.py mistral/tests/unit/expressions/__init__.py mistral/tests/unit/expressions/test_jinja_expression.py mistral/tests/unit/expressions/test_yaql_expression.py mistral/tests/unit/hacking/__init__.py mistral/tests/unit/hacking/test_checks.py mistral/tests/unit/lang/__init__.py mistral/tests/unit/lang/test_spec_caching.py mistral/tests/unit/lang/v2/__init__.py mistral/tests/unit/lang/v2/base.py mistral/tests/unit/lang/v2/test_actions.py mistral/tests/unit/lang/v2/test_tasks.py mistral/tests/unit/lang/v2/test_workbook.py mistral/tests/unit/lang/v2/test_workflows.py mistral/tests/unit/mstrlfixtures/__init__.py mistral/tests/unit/mstrlfixtures/hacking.py mistral/tests/unit/mstrlfixtures/policy_fixtures.py mistral/tests/unit/rpc/__init__.py mistral/tests/unit/rpc/kombu/__init__.py mistral/tests/unit/rpc/kombu/base.py mistral/tests/unit/rpc/kombu/fake_kombu.py mistral/tests/unit/rpc/kombu/test_kombu_client.py mistral/tests/unit/rpc/kombu/test_kombu_listener.py mistral/tests/unit/rpc/kombu/test_kombu_server.py mistral/tests/unit/services/__init__.py mistral/tests/unit/services/test_action_manager.py 
mistral/tests/unit/services/test_action_service.py mistral/tests/unit/services/test_event_engine.py mistral/tests/unit/services/test_expiration_policy.py mistral/tests/unit/services/test_scheduler.py mistral/tests/unit/services/test_trigger_service.py mistral/tests/unit/services/test_workbook_service.py mistral/tests/unit/services/test_workflow_service.py mistral/tests/unit/utils/__init__.py mistral/tests/unit/utils/test_expression_utils.py mistral/tests/unit/utils/test_inspect_utils.py mistral/tests/unit/utils/test_keystone_utils.py mistral/tests/unit/utils/test_utils.py mistral/tests/unit/workflow/__init__.py mistral/tests/unit/workflow/test_direct_workflow.py mistral/tests/unit/workflow/test_reverse_workflow.py mistral/tests/unit/workflow/test_states.py mistral/tests/unit/workflow/test_workflow_base.py mistral/utils/__init__.py mistral/utils/expression_utils.py mistral/utils/filter_utils.py mistral/utils/inspect_utils.py mistral/utils/javascript.py mistral/utils/profiler.py mistral/utils/rest_utils.py mistral/utils/rpc_utils.py mistral/utils/ssh_utils.py mistral/utils/wf_trace.py mistral/utils/openstack/__init__.py mistral/utils/openstack/keystone.py mistral/workflow/__init__.py mistral/workflow/base.py mistral/workflow/commands.py mistral/workflow/data_flow.py mistral/workflow/direct_workflow.py mistral/workflow/lookup_utils.py mistral/workflow/reverse_workflow.py mistral/workflow/states.py mistral/workflow/utils.py playbooks/docker-buildimage/post.yaml playbooks/docker-buildimage/run.yaml playbooks/legacy/mistral-ha/run.yaml playbooks/rally/run.yaml rally-jobs/README.rst rally-jobs/task-mistral.yaml rally-jobs/extra/README.rst rally-jobs/extra/mistral_wb.yaml rally-jobs/extra/nested_wb.yaml rally-jobs/extra/scenarios/complex_wf/complex_wf_params.json rally-jobs/extra/scenarios/complex_wf/complex_wf_wb.yaml rally-jobs/extra/scenarios/join/join_100_wb.yaml rally-jobs/extra/scenarios/join/join_500_wb.yaml 
rally-jobs/extra/scenarios/with_items/count_100_concurrency_10.json rally-jobs/extra/scenarios/with_items/wb.yaml rally-jobs/plugins/README.rst rally-jobs/plugins/__init__.py releasenotes/notes/.placeholder releasenotes/notes/add-action-region-to-actions-353f6c4b10f76677.yaml releasenotes/notes/add-json-dump-deprecate-json-pp-252c6c495fd2dea1.yaml releasenotes/notes/add_public_event_triggers-ab6249ca85fd5497.yaml releasenotes/notes/alternative-rpc-layer-21ca7f6171c8f628.yaml releasenotes/notes/changing-context-in-delayed-calls-78d8e9a622fe3fe9.yaml releasenotes/notes/changing-isolation-level-to-read-committed-7080833ad284b901.yaml releasenotes/notes/create-and-run-workflows-within-namespaces-e4fba869a889f55f.yaml releasenotes/notes/drop-ceilometerclient-b33330a28906759e.yaml releasenotes/notes/evaluate_env_parameter-14baa54c860da11c.yaml releasenotes/notes/external_openstack_action_mapping_support-5cec5d9d5192feb7.yaml releasenotes/notes/function-called-tasks-available-in-an-expression-17ca83d797ffb3ab.yaml releasenotes/notes/include-output-paramter-in-action-execution-list-c946f1b38dc5a052.yaml releasenotes/notes/ironic-api-newton-9397da8135bb97b4.yaml releasenotes/notes/keycloak-auth-support-74131b49e2071762.yaml releasenotes/notes/magnum-actions-support-b131fa942b937fa5.yaml releasenotes/notes/mistral-aodh-actions-e4c2b7598d2e39ef.yaml releasenotes/notes/mistral-api-server-https-716a6d741893dd23.yaml releasenotes/notes/mistral-customize-authorization-d6b9a965f3056f09.yaml releasenotes/notes/mistral-docker-image-9d6e04ac928289dd.yaml releasenotes/notes/mistral-gnocchi-actions-f26fd76b8a4df40e.yaml releasenotes/notes/mistral-murano-actions-2250f745aaf8536a.yaml releasenotes/notes/mistral-senlin-actions-f3fe359c4e91de01.yaml releasenotes/notes/mistral-tempest-plugin-2f6dcbceb4d27eb0.yaml releasenotes/notes/new-service-actions-support-47279bd649732632.yaml releasenotes/notes/policy-and-doc-in-code-9f1737c474998991.yaml 
releasenotes/notes/region-name-support-9e4b4ccd963ace88.yaml releasenotes/notes/role-based-resource-access-control-3579714be15d9b0b.yaml releasenotes/notes/support-created-at-yaql-function-execution-6ece8eaf34664c38.yaml releasenotes/notes/support-env-in-adhoc-actions-20c98598893aa19f.yaml releasenotes/notes/support-manage-cron-trigger-by-id-ab544e8068b84967.yaml releasenotes/notes/tacket-actions-support-2b4cee2644313cb3.yaml releasenotes/notes/transition-message-8dc4dd99240bd0f7.yaml releasenotes/notes/update-mistral-docker-image-0c6294fc021545e0.yaml releasenotes/notes/update-retry-policy-fb5e73ce717ed066.yaml releasenotes/notes/use-workflow-uuid-30d5e51c6ac57f1d.yaml releasenotes/notes/validate-ad-hoc-action-api-added-6d7eaaedbe8129a7.yaml releasenotes/notes/workflow-create-instance-YaqlEvaluationException-e22afff26a193c4f.yaml releasenotes/notes/workflow-sharing-746255cda20c48d2.yaml releasenotes/notes/yaml-json-parse-53217627a647dc1d.yaml releasenotes/source/conf.py releasenotes/source/index.rst releasenotes/source/liberty.rst releasenotes/source/mitaka.rst releasenotes/source/newton.rst releasenotes/source/ocata.rst releasenotes/source/pike.rst releasenotes/source/unreleased.rst releasenotes/source/_static/.placeholder releasenotes/source/_templates/.placeholder tools/cover.sh tools/generate_mistralclient_help.sh tools/get_action_list.py tools/install_venv.py tools/install_venv_common.py tools/sync_db.py tools/sync_db.sh tools/test-setup.sh tools/update_env_deps tools/with_venv.sh tools/config/check_uptodate.sh tools/config/config-generator.mistral.conf tools/config/policy-generator.mistral.conf tools/docker/DOCKER_README.rst tools/docker/Dockerfile tools/docker/build.sh tools/docker/start_mistral_rabbit_mysql.shmistral-6.0.0/mistral.egg-info/dependency_links.txt0000664000175100017510000000000113245513601022427 0ustar zuulzuul00000000000000 mistral-6.0.0/mistral.egg-info/entry_points.txt0000664000175100017510000000511013245513601021654 0ustar 
zuulzuul00000000000000[console_scripts] mistral-db-manage = mistral.db.sqlalchemy.migration.cli:main mistral-server = mistral.cmd.launch:main [kombu_driver.executors] blocking = futurist:SynchronousExecutor threading = futurist:ThreadPoolExecutor [mistral.actions] std.async_noop = mistral.actions.std_actions:AsyncNoOpAction std.echo = mistral.actions.std_actions:EchoAction std.email = mistral.actions.std_actions:SendEmailAction std.fail = mistral.actions.std_actions:FailAction std.http = mistral.actions.std_actions:HTTPAction std.javascript = mistral.actions.std_actions:JavaScriptAction std.js = mistral.actions.std_actions:JavaScriptAction std.mistral_http = mistral.actions.std_actions:MistralHTTPAction std.noop = mistral.actions.std_actions:NoOpAction std.sleep = mistral.actions.std_actions:SleepAction std.ssh = mistral.actions.std_actions:SSHAction std.ssh_proxied = mistral.actions.std_actions:SSHProxiedAction std.test_dict = mistral.actions.std_actions:TestDictAction [mistral.auth] keycloak-oidc = mistral.auth.keycloak:KeycloakAuthHandler keystone = mistral.auth.keystone:KeystoneAuthHandler [mistral.executors] local = mistral.executors.default_executor:DefaultExecutor remote = mistral.executors.remote_executor:RemoteExecutor [mistral.expression.evaluators] jinja = mistral.expressions.jinja_expression:InlineJinjaEvaluator yaql = mistral.expressions.yaql_expression:InlineYAQLEvaluator [mistral.expression.functions] env = mistral.utils.expression_utils:env_ execution = mistral.utils.expression_utils:execution_ executions = mistral.utils.expression_utils:executions_ global = mistral.utils.expression_utils:global_ json_dump = mistral.utils.expression_utils:json_dump_ json_parse = mistral.utils.expression_utils:json_parse_ json_pp = mistral.utils.expression_utils:json_pp_ task = mistral.utils.expression_utils:task_ tasks = mistral.utils.expression_utils:tasks_ uuid = mistral.utils.expression_utils:uuid_ yaml_dump = mistral.utils.expression_utils:yaml_dump_ yaml_parse 
= mistral.utils.expression_utils:yaml_parse_ [mistral.rpc.backends] kombu_client = mistral.rpc.kombu.kombu_client:KombuRPCClient kombu_server = mistral.rpc.kombu.kombu_server:KombuRPCServer oslo_client = mistral.rpc.oslo.oslo_client:OsloRPCClient oslo_server = mistral.rpc.oslo.oslo_server:OsloRPCServer [oslo.config.opts] mistral.config = mistral.config:list_opts [oslo.config.opts.defaults] mistral.config = mistral.config:set_cors_middleware_defaults [oslo.policy.policies] mistral = mistral.policies:list_rules [pygments.lexers] mistral = mistral.ext.pygmentplugin:MistralLexer [wsgi_scripts] mistral-wsgi-api = mistral.api.app:init_wsgi mistral-6.0.0/mistral.egg-info/not-zip-safe0000664000175100017510000000000113245513556020620 0ustar zuulzuul00000000000000 mistral-6.0.0/mistral.egg-info/requires.txt0000664000175100017510000000275413245513601020771 0ustar zuulzuul00000000000000alembic>=0.8.10 aodhclient>=0.9.0 Babel!=2.4.0,>=2.3.4 croniter>=0.3.4 cachetools>=2.0.0 eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 gnocchiclient>=3.3.1 Jinja2!=2.9.0,!=2.9.1,!=2.9.2,!=2.9.3,!=2.9.4,>=2.8 jsonschema<3.0.0,>=2.6.0 keystonemiddleware>=4.17.0 mistral-lib>=0.3.0 networkx<2.0,>=1.10 oslo.concurrency>=3.25.0 oslo.config>=5.1.0 oslo.context>=2.19.2 oslo.db>=4.27.0 oslo.i18n>=3.15.3 oslo.messaging>=5.29.0 oslo.middleware>=3.31.0 oslo.policy>=1.30.0 oslo.utils>=3.33.0 oslo.log>=3.36.0 oslo.serialization!=2.19.1,>=2.18.0 oslo.service!=1.28.1,>=1.24.0 osprofiler>=1.4.0 paramiko>=2.0.0 pbr!=2.1.0,>=2.0.0 pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0 python-barbicanclient!=4.5.0,!=4.5.1,>=4.0.0 python-cinderclient>=3.3.0 python-designateclient>=2.7.0 python-glanceclient>=2.8.0 python-glareclient>=0.3.0 python-heatclient>=1.10.0 python-keystoneclient>=3.8.0 python-mistralclient>=3.1.0 python-magnumclient>=2.1.0 python-muranoclient>=0.8.2 python-neutronclient>=6.3.0 python-novaclient>=9.1.0 python-senlinclient>=1.1.0 python-swiftclient>=3.2.0 python-tackerclient>=0.8.0 python-troveclient>=2.2.0 
python-ironicclient>=2.2.0 python-ironic-inspector-client>=1.5.0 python-zaqarclient>=1.0.0 PyJWT>=1.0.1 PyYAML>=3.10 requests>=2.14.2 tenacity>=3.2.1 setuptools!=24.0.0,!=34.0.0,!=34.0.1,!=34.0.2,!=34.0.3,!=34.1.0,!=34.1.1,!=34.2.0,!=34.3.0,!=34.3.1,!=34.3.2,!=36.2.0,>=16.0 six>=1.10.0 SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 stevedore>=1.20.0 WSME>=0.8.0 yaql>=1.1.3 tooz>=1.58.0 zake>=0.1.6 mistral-6.0.0/mistral.egg-info/pbr.json0000664000175100017510000000005613245513601020040 0ustar zuulzuul00000000000000{"git_version": "82d5d10", "is_release": true}mistral-6.0.0/mistral.egg-info/top_level.txt0000664000175100017510000000001013245513601021102 0ustar zuulzuul00000000000000mistral mistral-6.0.0/mistral.egg-info/PKG-INFO0000664000175100017510000002637113245513601017467 0ustar zuulzuul00000000000000Metadata-Version: 1.1 Name: mistral Version: 6.0.0 Summary: Mistral Project Home-page: https://docs.openstack.org/mistral/latest/ Author: OpenStack Author-email: openstack-dev@lists.openstack.org License: Apache License, Version 2.0 Description-Content-Type: UNKNOWN Description: ======================== Team and repository tags ======================== .. image:: https://governance.openstack.org/tc/badges/mistral.svg :target: https://governance.openstack.org/tc/reference/tags/index.html Mistral ======= Workflow Service for OpenStack cloud. This project aims to provide a mechanism to define tasks and workflows without writing code, manage and execute them in the cloud environment. Installation ~~~~~~~~~~~~ The following are the steps to install Mistral on debian-based systems. To install Mistral, you have to install the following prerequisites:: $ apt-get install python-dev python-setuptools libffi-dev \ libxslt1-dev libxml2-dev libyaml-dev libssl-dev **Mistral can be used without authentication at all or it can work with OpenStack.** In case of OpenStack, it works **only with Keystone v3**, make sure **Keystone v3** is installed. 
Install Mistral --------------- First of all, clone the repo and go to the repo directory:: $ git clone https://git.openstack.org/openstack/mistral.git $ cd mistral **Devstack installation** Information about how to install Mistral with devstack can be found `here `_. Configuring Mistral ~~~~~~~~~~~~~~~~~~~ Mistral configuration is needed for getting it to work correctly with and without an OpenStack environment. #. Install and configure a database, which can be *MySQL* or *PostgreSQL* (**SQLite can't be used in production.**). Here are the steps to connect Mistral to a *MySQL* database. * Make sure you have installed the ``mysql-server`` package on your Mistral machine. * Install the *MySQL driver* for Python:: $ pip install mysql-python or, if you work in virtualenv, run:: $ tox -evenv -- pip install mysql-python NOTE: If you're using Python 3, you need to install ``mysqlclient`` instead of ``mysql-python``. * Create the database and grant privileges:: $ mysql -u root -p mysql> CREATE DATABASE mistral; mysql> USE mistral mysql> GRANT ALL PRIVILEGES ON mistral.* TO 'mistral'@'localhost' \ IDENTIFIED BY 'MISTRAL_DBPASS'; mysql> GRANT ALL PRIVILEGES ON mistral.* TO 'mistral'@'%' IDENTIFIED BY 'MISTRAL_DBPASS'; #. Generate the ``mistral.conf`` file:: $ oslo-config-generator --config-file tools/config/config-generator.mistral.conf \ --output-file etc/mistral.conf.sample #. Copy service configuration files:: $ sudo mkdir /etc/mistral $ sudo chown `whoami` /etc/mistral $ cp etc/event_definitions.yml.sample /etc/mistral/event_definitions.yml $ cp etc/logging.conf.sample /etc/mistral/logging.conf $ cp etc/policy.json /etc/mistral/policy.json $ cp etc/wf_trace_logging.conf.sample /etc/mistral/wf_trace_logging.conf $ cp etc/mistral.conf.sample /etc/mistral/mistral.conf #. Edit the file ``/etc/mistral/mistral.conf`` according to your setup. 
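As a quick sanity check after the copy step above, a small helper can report which of the expected ``/etc/mistral`` files are still missing. This is an illustrative sketch, not part of Mistral itself; the helper name is an assumption, and the file list is taken from the ``cp`` commands above.

```python
# Illustrative helper (not part of Mistral): check which of the config
# files copied in the setup steps above are present in /etc/mistral.
from pathlib import Path

# File names taken from the ``cp`` commands in the setup instructions.
EXPECTED_CONFIG_FILES = [
    "event_definitions.yml",
    "logging.conf",
    "policy.json",
    "wf_trace_logging.conf",
    "mistral.conf",
]


def missing_config_files(config_dir="/etc/mistral"):
    """Return the expected config files that are not present yet."""
    base = Path(config_dir)
    return [name for name in EXPECTED_CONFIG_FILES
            if not (base / name).is_file()]
```

Running it against a freshly created ``/etc/mistral`` shows at a glance which copy steps were skipped.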
Pay attention to the following sections and options:: [oslo_messaging_rabbit] rabbit_host = rabbit_userid = rabbit_password = [database] # Use the following line if *PostgreSQL* is used # connection = postgresql://:@localhost:5432/mistral connection = mysql://:@localhost:3306/mistral #. If you are not using OpenStack, add the following entry to the ``/etc/mistral/mistral.conf`` file and **skip the following steps**:: [pecan] auth_enable = False #. Provide valid keystone auth properties:: [keystone_authtoken] auth_uri = http://keystone-host:port/v3 auth_url = http://keystone-host:port auth_type = password username = password = user_domain_name = project_name = project_domain_name = #. Register Mistral service and Mistral endpoints on Keystone:: $ MISTRAL_URL="http://[host]:[port]/v2" $ openstack service create --name mistral workflowv2 $ openstack endpoint create mistral public $MISTRAL_URL $ openstack endpoint create mistral internal $MISTRAL_URL $ openstack endpoint create mistral admin $MISTRAL_URL #. Update the ``mistral/actions/openstack/mapping.json`` file, which contains all available OpenStack actions, according to the specific client versions of OpenStack projects in your deployment. Please find more detailed information in the ``tools/get_action_list.py`` script. Before the First Run -------------------- After local installation you will find the commands ``mistral-server`` and ``mistral-db-manage`` available in your environment. The ``mistral-db-manage`` command can be used for migrating database schema versions. If Mistral is not installed in the system, this script can be found at ``mistral/db/sqlalchemy/migration/cli.py`` and can be executed directly with Python. 
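The ``connection`` value shown above follows the standard SQLAlchemy URL format, with port 3306 for MySQL and 5432 for PostgreSQL. A minimal sketch of composing it (the helper name and defaults are assumptions for illustration, not Mistral code):

```python
# Illustrative sketch: compose the [database] connection value from its
# parts, using the default ports shown in the configuration examples.
def build_connection_url(user, password, host, database,
                         driver="mysql", port=None):
    """Return an SQLAlchemy-style URL such as
    mysql://user:pass@localhost:3306/mistral."""
    if port is None:
        # Defaults taken from the sample config lines above.
        port = 5432 if driver.startswith("postgresql") else 3306
    return "%s://%s:%s@%s:%d/%s" % (
        driver, user, password, host, port, database)
```

For example, ``build_connection_url("mistral", "MISTRAL_DBPASS", "localhost", "mistral")`` yields the MySQL connection string used in the sample configuration.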
To update the database schema to the latest revision, type:: $ mistral-db-manage --config-file upgrade head To populate the database with standard actions and workflows, type:: $ mistral-db-manage --config-file populate For more detailed information about ``mistral-db-manage`` script please check file ``mistral/db/sqlalchemy/migration/alembic_migrations/README.md``. Running Mistral API server -------------------------- To run Mistral API server:: $ tox -evenv -- python mistral/cmd/launch.py --server api --config-file Running Mistral Engines ----------------------- To run Mistral Engine:: $ tox -evenv -- python mistral/cmd/launch.py --server engine --config-file Running Mistral Task Executors ------------------------------ To run Mistral Task Executor instance:: $ tox -evenv -- python mistral/cmd/launch.py --server executor --config-file Note that at least one Engine instance and one Executor instance should be running in order for workflow tasks to be processed by Mistral. If you want to run some tasks on specific executor, the *task affinity* feature can be used to send these tasks directly to a specific executor. You can edit the following property in your mistral configuration file for this purpose:: [executor] host = my_favorite_executor After changing this option, you will need to start (restart) the executor. Use the ``target`` property of a task to specify the executor:: ... Workflow YAML ... task1: ... target: my_favorite_executor ... Workflow YAML ... Running Multiple Mistral Servers Under the Same Process ------------------------------------------------------- To run more than one server (API, Engine, or Task Executor) on the same process:: $ tox -evenv -- python mistral/cmd/launch.py --server api,engine --config-file The value for the ``--server`` option can be a comma-delimited list. The valid options are ``all`` (which is the default if not specified) or any combination of ``api``, ``engine``, and ``executor``. 
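The ``--server`` handling described above (a comma-delimited list whose valid values are ``all`` or any combination of ``api``, ``engine``, and ``executor``) can be sketched as a small validator. The function name and error handling here are illustrative assumptions, not Mistral's actual implementation:

```python
# Illustrative sketch of validating the comma-delimited ``--server``
# option; not Mistral's actual launcher code.
VALID_SERVERS = {"api", "engine", "executor"}


def parse_server_option(value="all"):
    """Return the set of server types to launch for a --server value."""
    servers = {s.strip() for s in value.split(",") if s.strip()}
    if "all" in servers:
        # ``all`` (the default) expands to every server type.
        return set(VALID_SERVERS)
    unknown = servers - VALID_SERVERS
    if unknown:
        raise ValueError(
            "unknown server type(s): %s" % ", ".join(sorted(unknown)))
    return servers
```

With this sketch, ``--server api,engine`` maps to launching the API and engine servers in one process, matching the example above.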
It's important to note that the ``fake`` transport for the ``rpc_backend`` defined in the configuration file should only be used if ``all`` Mistral servers are launched on the same process. Otherwise, messages do not get delivered because the ``fake`` transport is using an in-process queue. Project Goals 2018 ------------------ #. **Complete Mistral documentation**. Mistral documentation should be more usable. It requires focused work to make it well structured, eliminate gaps in API/Mistral Workflow Language specifications, add more examples and tutorials. *Definition of done*: All capabilities are covered, all documentation topics are written using the same style and structure principles. The obvious sub-goal of this goal is to establish these principles. #. **Finish Mistral multi-node mode**. Mistral needs to be proven to work reliably in multi-node mode. In order to achieve it we need to make a number of engine, executor and RPC changes and configure a CI gate to run stress tests on multi-node Mistral. *Definition of done*: CI gate supports MySQL, all critically important functionality (join, with-items, parallel workflows, sequential workflows) is covered by tests. 
Project Resources ----------------- * `Mistral Official Documentation `_ * Project status, bugs, and blueprints are tracked on `Launchpad `_ * Additional resources are linked from the project `Wiki `_ page * Apache License Version 2.0 http://www.apache.org/licenses/LICENSE-2.0 Platform: UNKNOWN Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.5 Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux mistral-6.0.0/tox.ini0000666000175100017510000000575113245513262014544 0ustar zuulzuul00000000000000[tox] envlist = py35,py27,pep8 minversion = 1.6 skipsdist = True [testenv] usedevelop = True install_command = pip install {opts} {packages} setenv = VIRTUAL_ENV={envdir} PYTHONDONTWRITEBYTECODE = 1 PYTHONWARNINGS=default::DeprecationWarning passenv = http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY deps = -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} -r{toxinidir}/test-requirements.txt -r{toxinidir}/requirements.txt commands = rm -f .testrepository/times.dbm find . -type f -name "*.pyc" -delete python setup.py testr --slowest --testr-args='{posargs}' whitelist_externals = rm find [testenv:unit-postgresql] setenv = VIRTUAL_ENV={envdir} passenv = ZUUL_PROJECT commands = ./run_tests.sh -N --db-type postgresql [testenv:unit-mysql] setenv = VIRTUAL_ENV={envdir} passenv = ZUUL_PROJECT commands = ./run_tests.sh -N --db-type mysql [testenv:pep8] basepython = python2.7 commands = doc8 doc/source flake8 {posargs} . 
{toxinidir}/tools/get_action_list.py {toxinidir}/tools/sync_db.py [testenv:cover] # Also do not run test_coverage_ext tests while gathering coverage as those # tests conflict with coverage. setenv = VIRTUAL_ENV={envdir} commands = {toxinidir}/tools/cover.sh {posargs} [testenv:genconfig] commands = oslo-config-generator --config-file tools/config/config-generator.mistral.conf \ --output-file etc/mistral.conf.sample [testenv:genpolicy] commands = oslopolicy-sample-generator --config-file tools/config/policy-generator.mistral.conf \ --output-file etc/policy.yaml.sample #set PYTHONHASHSEED=0 to prevent wsmeext.sphinxext from randomly failing. [testenv:venv] basepython = python2.7 setenv = PYTHONHASHSEED=0 commands = {posargs} #set PYTHONHASHSEED=0 to prevent wsmeext.sphinxext from randomly failing. [testenv:docs] basepython = python2.7 setenv = PYTHONHASHSEED=0 commands = rm -rf doc/build sphinx-build -b html doc/source doc/build/html [testenv:releasenotes] commands = rm -rf releasenotes/build sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html [testenv:api-ref] # This environment is called from CI scripts to test and publish # the API Ref to developer.openstack.org. commands = rm -rf api-ref/build sphinx-build -W -b html -d api-ref/build/doctrees api-ref/source api-ref/build/html whitelist_externals = rm #Skip PEP257 violation. [flake8] ignore = D100,D101,D102,D103,D104,D105,D200,D203,D202,D204,D205,D208,D400,D401 show-source = true builtins = _ # [H106] Don't put vim configuration in source files. # [H203] Use assertIs(Not)None to check for None. # [H904] Delay string interpolations at logging calls. enable-extensions = H106,H203,H904 exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,tools,scripts [doc8] extensions = .rst, .yaml # Maximal line length should be 80. 
max-line-length = 80 [hacking] local-check-factory = mistral.hacking.checks.factory import_exceptions = mistral._i18n mistral-6.0.0/tools/0000775000175100017510000000000013245513605014360 5ustar zuulzuul00000000000000mistral-6.0.0/tools/test-setup.sh0000777000175100017510000000350313245513262017036 0ustar zuulzuul00000000000000#!/bin/bash -xe # This script will be run by OpenStack CI before unit tests are run, # it sets up the test system as needed. # Developers should set up their test systems in a similar way. # This setup needs to be run as a user that can run sudo. # The root password for the MySQL database; pass it in via # MYSQL_ROOT_PW. DB_ROOT_PW=${MYSQL_ROOT_PW:-insecure_slave} # This user and its password are used by the tests, if you change it, # your tests might fail. DB_USER=openstack_citest DB_PW=openstack_citest sudo -H mysqladmin -u root password $DB_ROOT_PW # It's best practice to remove anonymous users from the database. If # an anonymous user exists, then it matches first for connections and # other connections from that host will not work. sudo -H mysql -u root -p$DB_ROOT_PW -h localhost -e " DELETE FROM mysql.user WHERE User=''; FLUSH PRIVILEGES; GRANT ALL PRIVILEGES ON *.* TO '$DB_USER'@'%' identified by '$DB_PW' WITH GRANT OPTION;" # Now create our database.
mysql -u $DB_USER -p$DB_PW -h 127.0.0.1 -e " SET default_storage_engine=MYISAM; DROP DATABASE IF EXISTS openstack_citest; CREATE DATABASE openstack_citest CHARACTER SET utf8;" # Same for PostgreSQL # Setup user root_roles=$(sudo -H -u postgres psql -t -c " SELECT 'HERE' from pg_roles where rolname='$DB_USER'") if [[ ${root_roles} == *HERE ]];then sudo -H -u postgres psql -c "ALTER ROLE $DB_USER WITH SUPERUSER LOGIN PASSWORD '$DB_PW'" else sudo -H -u postgres psql -c "CREATE ROLE $DB_USER WITH SUPERUSER LOGIN PASSWORD '$DB_PW'" fi # Store password for tests cat << EOF > $HOME/.pgpass *:*:*:$DB_USER:$DB_PW EOF chmod 0600 $HOME/.pgpass # Now create our database psql -h 127.0.0.1 -U $DB_USER -d template1 -c "DROP DATABASE IF EXISTS openstack_citest" createdb -h 127.0.0.1 -U $DB_USER -l C -T template0 -E utf8 openstack_citest mistral-6.0.0/tools/sync_db.sh0000777000175100017510000000006513245513262016342 0ustar zuulzuul00000000000000#!/bin/sh tox -evenv -- python tools/sync_db.py "$@"mistral-6.0.0/tools/get_action_list.py0000666000175100017510000002357413245513262020115 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import argparse import collections import inspect import json import os from aodhclient.v2 import base as aodh_base from aodhclient.v2 import client as aodhclient from barbicanclient import base as barbican_base from barbicanclient import client as barbicanclient from cinderclient.apiclient import base as cinder_base from cinderclient.v2 import client as cinderclient from designateclient import client as designateclient from glanceclient.v2 import client as glanceclient from glareclient.v1 import client as glareclient from gnocchiclient.v1 import base as gnocchi_base from gnocchiclient.v1 import client as gnocchiclient from heatclient.common import base as heat_base from heatclient.v1 import client as heatclient from ironicclient.common import base as ironic_base from ironicclient.v1 import client as ironicclient from keystoneclient import base as keystone_base from keystoneclient.v3 import client as keystoneclient from magnumclient.common import base as magnum_base from magnumclient.v1 import client as magnumclient from mistralclient.api import base as mistral_base from mistralclient.api.v2 import client as mistralclient from muranoclient.common import base as murano_base from muranoclient.v1 import client as muranoclient from novaclient import base as nova_base from novaclient import client as novaclient from troveclient import base as trove_base from troveclient.v1 import client as troveclient # TODO(nmakhotkin): Find a rational way to do it for neutron. # TODO(nmakhotkin): Implement recursive way of searching for managers # TODO(nmakhotkin): (e.g. keystone). # TODO(dprince): Need to update ironic_inspector_client before we can # plug it in cleanly here. # TODO(dprince): Swiftclient doesn't currently support discovery # like we do in this class. # TODO(therve): Zaqarclient doesn't currently support discovery # like we do in this class. # TODO(sa709c): Tackerclient doesn't currently support discovery # like we do in this class. 
"""This is a simple CLI tool which allows you to view and update the mapping.json file if needed. mapping.json contains all allowed OpenStack actions sorted by service name. Usage example: python tools/get_action_list.py nova The result will be a simple JSON containing an action name as a key and a method path as a value. To update mapping.json, copy all keys and values of the result to the corresponding section of mapping.json: ...mapping.json... "nova": { }, ...mapping.json... Note: in the case of the Keystone service, a correct OS_AUTH_URL v3 and the rest of the auth info must be provided. It can be provided either via environment variables or CLI arguments. See --help for details. """ BASE_HEAT_MANAGER = heat_base.HookableMixin BASE_NOVA_MANAGER = nova_base.HookableMixin BASE_KEYSTONE_MANAGER = keystone_base.Manager BASE_CINDER_MANAGER = cinder_base.HookableMixin BASE_MISTRAL_MANAGER = mistral_base.ResourceManager BASE_TROVE_MANAGER = trove_base.Manager BASE_IRONIC_MANAGER = ironic_base.Manager BASE_BARBICAN_MANAGER = barbican_base.BaseEntityManager BASE_MAGNUM_MANAGER = magnum_base.Manager BASE_MURANO_MANAGER = murano_base.Manager BASE_AODH_MANAGER = aodh_base.Manager BASE_GNOCCHI_MANAGER = gnocchi_base.Manager def get_parser(): parser = argparse.ArgumentParser( description='Gets all needed methods of OpenStack clients.', usage="python get_action_list.py " ) parser.add_argument( 'service', choices=CLIENTS.keys(), help='Service name whose methods need to be found.'
) parser.add_argument( '--os-username', dest='username', default=os.environ.get('OS_USERNAME', 'admin'), help='Authentication username (Env: OS_USERNAME)' ) parser.add_argument( '--os-password', dest='password', default=os.environ.get('OS_PASSWORD', 'openstack'), help='Authentication password (Env: OS_PASSWORD)' ) parser.add_argument( '--os-tenant-name', dest='tenant_name', default=os.environ.get('OS_TENANT_NAME', 'Default'), help='Authentication tenant name (Env: OS_TENANT_NAME)' ) parser.add_argument( '--os-auth-url', dest='auth_url', default=os.environ.get('OS_AUTH_URL'), help='Authentication URL (Env: OS_AUTH_URL)' ) return parser GLANCE_NAMESPACE_LIST = [ 'image_members', 'image_tags', 'images', 'schemas', 'tasks', 'metadefs_resource_type', 'metadefs_property', 'metadefs_object', 'metadefs_tag', 'metadefs_namespace', 'versions' ] DESIGNATE_NAMESPACE_LIST = [ 'diagnostics', 'domains', 'quotas', 'records', 'reports', 'servers', 'sync', 'touch' ] GLARE_NAMESPACE_LIST = ['artifacts', 'versions'] def get_nova_client(**kwargs): return novaclient.Client(2) def get_keystone_client(**kwargs): return keystoneclient.Client(**kwargs) def get_glance_client(**kwargs): return glanceclient.Client(kwargs.get('auth_url')) def get_heat_client(**kwargs): return heatclient.Client('') def get_cinder_client(**kwargs): return cinderclient.Client() def get_mistral_client(**kwargs): return mistralclient.Client() def get_trove_client(**kwargs): return troveclient.Client('username', 'password') def get_ironic_client(**kwargs): return ironicclient.Client("http://127.0.0.1:6385/") def get_barbican_client(**kwargs): return barbicanclient.Client( project_id="1", endpoint="http://127.0.0.1:9311" ) def get_designate_client(**kwargs): return designateclient.Client('1') def get_magnum_client(**kwargs): return magnumclient.Client() def get_murano_client(**kwargs): return muranoclient.Client('') def get_aodh_client(**kwargs): return aodhclient.Client('') def get_gnocchi_client(**kwargs): return 
gnocchiclient.Client() def get_glare_client(**kwargs): return glareclient.Client('') CLIENTS = { 'nova': get_nova_client, 'heat': get_heat_client, 'cinder': get_cinder_client, 'keystone': get_keystone_client, 'glance': get_glance_client, 'trove': get_trove_client, 'ironic': get_ironic_client, 'barbican': get_barbican_client, 'mistral': get_mistral_client, 'designate': get_designate_client, 'magnum': get_magnum_client, 'murano': get_murano_client, 'aodh': get_aodh_client, 'gnocchi': get_gnocchi_client, 'glare': get_glare_client, # 'neutron': get_nova_client # 'baremetal_introspection': ... # 'swift': ... # 'zaqar': ... } BASE_MANAGERS = { 'nova': BASE_NOVA_MANAGER, 'heat': BASE_HEAT_MANAGER, 'cinder': BASE_CINDER_MANAGER, 'keystone': BASE_KEYSTONE_MANAGER, 'glance': None, 'trove': BASE_TROVE_MANAGER, 'ironic': BASE_IRONIC_MANAGER, 'barbican': BASE_BARBICAN_MANAGER, 'mistral': BASE_MISTRAL_MANAGER, 'designate': None, 'magnum': BASE_MAGNUM_MANAGER, 'murano': BASE_MURANO_MANAGER, 'aodh': BASE_AODH_MANAGER, 'gnocchi': BASE_GNOCCHI_MANAGER, 'glare': None, # 'neutron': BASE_NOVA_MANAGER # 'baremetal_introspection': ... # 'swift': ... # 'zaqar': ... 
} NAMESPACES = { 'glance': GLANCE_NAMESPACE_LIST, 'designate': DESIGNATE_NAMESPACE_LIST, 'glare': GLARE_NAMESPACE_LIST } ALLOWED_ATTRS = ['service_catalog', 'catalog'] FORBIDDEN_METHODS = [ 'add_hook', 'alternate_service_type', 'completion_cache', 'run_hooks', 'write_to_completion_cache', 'model', 'build_key_only_query', 'build_url', 'head', 'put', 'unvalidated_model' ] def get_public_attrs(obj): all_attrs = dir(obj) return [a for a in all_attrs if not a.startswith('_')] def get_public_methods(attr, client): hierarchy_list = attr.split('.') attribute = client for attr in hierarchy_list: attribute = getattr(attribute, attr) all_attributes_list = get_public_attrs(attribute) methods = [] for a in all_attributes_list: allowed = a in ALLOWED_ATTRS forbidden = a in FORBIDDEN_METHODS if (not forbidden and (allowed or inspect.ismethod(getattr(attribute, a)))): methods.append(a) return methods def get_manager_list(service_name, client): base_manager = BASE_MANAGERS[service_name] if not base_manager: return NAMESPACES[service_name] public_attrs = get_public_attrs(client) manager_list = [] for attr in public_attrs: if (isinstance(getattr(client, attr), base_manager) or attr in ALLOWED_ATTRS): manager_list.append(attr) return manager_list def get_mapping_for_service(service, client): mapping = collections.OrderedDict() for man in get_manager_list(service, client): public_methods = get_public_methods(man, client) for method in public_methods: key = "%s_%s" % (man, method) value = "%s.%s" % (man, method) mapping[key] = value return mapping def print_mapping(mapping): print(json.dumps(mapping, indent=8, separators=(',', ': '))) if __name__ == "__main__": args = get_parser().parse_args() auth_info = { 'username': args.username, 'tenant_name': args.tenant_name, 'password': args.password, 'auth_url': args.auth_url } service = args.service client = CLIENTS.get(service)(**auth_info) print("Find methods for service: %s..." 
% service) print_mapping(get_mapping_for_service(service, client)) mistral-6.0.0/tools/sync_db.py0000666000175100017510000000356013245513262016360 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import sys import keystonemiddleware.opts as keystonemw_opts from oslo_config import cfg from oslo_log import log as logging from mistral import config from mistral.db.v2 import api as db_api from mistral.services import action_manager from mistral.services import workflows CONF = cfg.CONF LOG = logging.getLogger(__name__) def main(): # NOTE(jaosorior): This is needed in order for db-sync to also register the # keystonemiddleware options. Those options are used by clients that need a # keystone session in order to be able to register their actions. # This can be removed when mistral moves out of using keystonemiddleware in # favor of keystoneauth1. 
for group, opts in keystonemw_opts.list_auth_token_opts(): CONF.register_opts(opts, group=group) CONF.register_cli_opt(config.os_actions_mapping_path) logging.register_options(CONF) config.parse_args() if len(CONF.config_file) == 0: print("Usage: sync_db --config-file ") return exit(1) logging.setup(CONF, 'Mistral') LOG.info("Starting db_sync") LOG.debug("Setting up db") db_api.setup_db() LOG.debug("populating db") action_manager.sync_db() workflows.sync_db() if __name__ == '__main__': sys.exit(main()) mistral-6.0.0/tools/with_venv.sh0000777000175100017510000000033213245513262016727 0ustar zuulzuul00000000000000#!/bin/bash tools_path=${tools_path:-$(dirname $0)} venv_path=${venv_path:-${tools_path}} venv_dir=${venv_name:-/../.venv} TOOLS=${tools_path} VENV=${venv:-${venv_path}/${venv_dir}} source ${VENV}/bin/activate && "$@" mistral-6.0.0/tools/docker/0000775000175100017510000000000013245513605015627 5ustar zuulzuul00000000000000mistral-6.0.0/tools/docker/DOCKER_README.rst0000666000175100017510000000510513245513262020347 0ustar zuulzuul00000000000000Using Mistral with docker ========================= In order to minimize the work needed to run the current Mistral code, or be able to spin up independent or networked Mistral instances in seconds, Docker containers are a very good option. This guide describes the process to launch an all-in-one Mistral container. Docker installation ------------------- In order to install the latest docker engine, run:: curl -fsSL https://get.docker.com/ | sh If you are behind a proxy, additional configuration may be needed to be able to execute further steps in the setup process. For detailed information on this process, check out `the official guide at `_. Build the Mistral image ----------------------- The `build.sh` script takes care of creating the `mistral-all` image locally. This image is configured to use RabbitMQ for transport and MySQL as the database backend.
It is possible to run Mistral with Sqlite as database backend but it is very unreliable, thus, MySQL was selected as the default database backend for this image. Running Mistral with MySQL -------------------------- The `start_mistral_rabbit_mysql.sh` script sets up a rabbitmq container, a mysql container and a mistral container to work together. The script can be invoked with:: start_mistral_rabbit_mysql.sh [single|multi] `single` mode (this is the default) will create - rabbitmq container, - the mysql container, - and the mistral container that runs all Mistral services. `multi` mode will create - rabbitmq, - mysql, - mistral-api, - one mistral-engine, - two mistral-executors Check out the script for more detail and examples for different setup options. Using Mistral ------------- Depending on the mode, you may need to use the `mistral` or the `mistral-api` container. With the `multi` option execute commands inside the container:: docker exec -it mistral-api bash E.g. to list workflows, issue:: mistral workflow-list The script also configures the containers so that the Mistral API will be accessible from the host machine on the default port 8989. So it is also possible to install the `mistral-pythonclient` to the host machine and execute commands there. Configuring Mistral ------------------- The Mistral configuration is stored in the Docker image. The changes to the configuration should be synchronized between all deployed containers to ensure consistent behavior. This can be achieved by mounting the configuration as a volume:: export EXTRA_OPTS='-v :/etc/mistral/mistral.conf:ro' start_mistral_rabbit_mysql.sh multi mistral-6.0.0/tools/docker/start_mistral_rabbit_mysql.sh0000777000175100017510000000556613245513262023643 0ustar zuulzuul00000000000000#! /bin/bash -e if [ "${1}" == "--help" ]; then echo ' Synopsis: start_mistral_rabbit_mysql.sh [single|multi|clean] Environment variables: EXTRA_OPTS : extra parameters to be used for all mistral containers (e.g. 
-v) MYSQL_ROOT_PASSWORD : password for the MySQL server SCRATCH : remove all existing containers (RabbitMQ and MySQL are not removed by default) ' exit 0 fi MODE=${1:-single} export MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD:-strangehat} MISTRAL_CONTAINERS=$(docker ps -a --format '{{.ID}} {{.Names}}' | grep mistral || true) if [ -z "$SCRATCH" -a "$MODE" != 'clean' ]; then MISTRAL_CONTAINERS=$(echo "$MISTRAL_CONTAINERS" | grep -v rabbitmq | grep -v mysql | cat) fi if [ -n "$MISTRAL_CONTAINERS" ]; then echo "Removing existing containers: $MISTRAL_CONTAINERS" KILLED_CONTAINERS=$(echo "$MISTRAL_CONTAINERS" | awk '{print $1}') docker kill -s 9 $KILLED_CONTAINERS docker rm $KILLED_CONTAINERS fi if [ "$MODE" == 'clean' ]; then echo "Clean complete" exit 0 fi if [ -z "$(docker ps -aq --filter "Name=mistral-mysql")" ]; then docker create --name mistral-mysql -e MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} mysql fi docker start mistral-mysql if [ -z "$(docker ps -aq --filter "Name=mistral-rabbitmq")" ]; then docker create --name mistral-rabbitmq rabbitmq fi docker start mistral-rabbitmq while true; do sleep 5 docker exec mistral-mysql \ mysql -u root -pstrangehat \ -e "CREATE DATABASE IF NOT EXISTS mistral; USE mistral; GRANT ALL ON mistral.* TO 'root'@'%' IDENTIFIED BY '${MYSQL_ROOT_PASSWORD}'" \ && break || true done sleep 10 docker run -dit --link mistral-mysql:mysql --name mistral-db-setup mistral-all cat docker exec mistral-db-setup python /opt/stack/mistral/tools/sync_db.py docker kill -s 9 mistral-db-setup docker rm mistral-db-setup function run_mistral() { NAME=${1:-mistral} shift || true LINKS='--link mistral-mysql:mysql --link mistral-rabbitmq:rabbitmq' docker run \ -d \ --name $NAME \ $LINKS \ ${EXTRA_OPTS} \ ${OPTS} \ mistral-all "$@" } unset OPTS case "$MODE" in single) # Single node setup # The CMD of the mistral-all image runs the `mistral-server --server all` command. 
OPTS="-p 8989:8989" run_mistral echo " Enter the container: docker exec -it mistral bash List workflows docker exec mistral mistral workflow-list " ;; multi) # Multinode setup OPTS="-p 8989:8989" run_mistral "mistral-api" mistral-server --server api run_mistral "mistral-engine" mistral-server --server engine run_mistral "mistral-executor-1" mistral-server --server executor run_mistral "mistral-executor-2" mistral-server --server executor echo " List workflows docker exec mistral-api mistral workflow-list " ;; esac mistral-6.0.0/tools/docker/build.sh0000777000175100017510000000023513245513262017266 0ustar zuulzuul00000000000000#!/bin/bash -xe SCRIPT_DIR="$(dirname "$(readlink -e "${BASH_SOURCE[0]}")")" ( cd "$SCRIPT_DIR" docker build -t mistral-all -f Dockerfile ../.. ) mistral-6.0.0/tools/docker/Dockerfile0000666000175100017510000000272613245513262017631 0ustar zuulzuul00000000000000FROM krallin/ubuntu-tini:16.04 MAINTAINER Andras Kovi RUN export DEBIAN_FRONTEND=noninteractive && \ apt-get -qq update && \ apt-get install -y \ curl \ git \ libffi-dev \ libssl-dev \ libxml2-dev \ libxslt1-dev \ libyaml-dev \ mc \ python-dev \ python-pip \ python-setuptools \ swig \ cmake \ crudini \ libuv1 \ libuv1-dev RUN pip install -v v8eval && python -c 'import v8eval' RUN apt-get install -y libmysqlclient-dev && \ pip install mysql-python RUN pip install -U tox python-mistralclient pip COPY . 
/opt/stack/mistral RUN curl -o /tmp/upper-constraints.txt http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt && \ sed -i '/^mistral.*/d' /tmp/upper-constraints.txt &&\ pip install -e /opt/stack/mistral RUN mkdir -p /etc/mistral RUN oslo-config-generator \ --config-file /opt/stack/mistral/tools/config/config-generator.mistral.conf \ --output-file /etc/mistral/mistral.conf RUN INI_SET="crudini --set /etc/mistral/mistral.conf" && \ $INI_SET DEFAULT js_implementation v8eval && \ $INI_SET DEFAULT transport_url rabbit://guest:guest@rabbitmq:5672/ && \ $INI_SET database connection mysql://root:strangehat@mysql:3306/mistral && \ $INI_SET oslo_policy policy_file /opt/stack/mistral/etc/policy.json && \ $INI_SET pecan auth_enable false EXPOSE 8989 CMD mistral-server --server all mistral-6.0.0/tools/update_env_deps0000777000175100017510000000102513245513262017452 0ustar zuulzuul00000000000000TOX_ENVLIST=`grep envlist tox.ini | cut -d '=' -f 2 | tr ',' ' '` TESTENVS=`grep testenv tox.ini | awk -F ':' '{print $2}' | tr '[]' ' '` UNFILTERED_ENVLIST=`echo "$TOX_ENVLIST $TESTENVS"` ENVLIST=$( awk 'BEGIN{RS=ORS=" "}!a[$0]++' <<<$UNFILTERED_ENVLIST ); for env in $ENVLIST do ENV_PATH=.tox/$env PIP_PATH=$ENV_PATH/bin/pip echo -e "\nUpdate environment ${env}...\n" if [ ! -d $ENV_PATH -o ! -f $PIP_PATH ] then tox --notest -e$env else $PIP_PATH install -r requirements.txt -r test-requirements.txt fi done mistral-6.0.0/tools/generate_mistralclient_help.sh0000777000175100017510000000151313245513262022454 0ustar zuulzuul00000000000000if [ -z "$1" ]; then echo echo "Usage: $(basename $0) " echo exit fi cmd_list=$(mistral --help | sed -e '1,/Commands for API/d' | cut -d " " -f 3 | grep -vwE "(help|complete|bash-completion)") file=$1 > $file for cmd in $cmd_list do echo "Processing help for command $cmd..." 
echo "**$cmd**:" >> $file read -d '' helpstr << EOF $(mistral help $cmd | sed -e '/output formatters/,$d' | grep -vwE "(--help)") EOF usage=$(echo "$helpstr" | sed -e '/^$/,$d' | sed 's/^/ /') helpstr=$(echo "$helpstr" | sed -e '1,/^$/d') echo -e "::\n" >> $file echo "$usage" >> $file echo >> $file echo "$helpstr" >> $file echo >> $file done # Delete empty 'optional arguments:'. sed -i '/optional arguments:/ { N /^optional arguments:\n$/d }' $file # Delete extra empty lines. sed -i '/^$/ { N /^\n$/d }' $file mistral-6.0.0/tools/config/0000775000175100017510000000000013245513605015625 5ustar zuulzuul00000000000000mistral-6.0.0/tools/config/check_uptodate.sh0000777000175100017510000000132013245513262021143 0ustar zuulzuul00000000000000#!/usr/bin/env bash PROJECT_NAME=${PROJECT_NAME:-mistral} CFGFILE_NAME=${PROJECT_NAME}.conf.sample if [ -e etc/${PROJECT_NAME}/${CFGFILE_NAME} ]; then CFGFILE=etc/${PROJECT_NAME}/${CFGFILE_NAME} elif [ -e etc/${CFGFILE_NAME} ]; then CFGFILE=etc/${CFGFILE_NAME} else echo "${0##*/}: can not find config file" exit 1 fi TEMPDIR=$(mktemp -d /tmp/${PROJECT_NAME}.XXXXXX) trap "rm -rf $TEMPDIR" EXIT oslo-config-generator --config-file tools/config/config-generator.mistral.conf --output-file ${TEMPDIR}/${CFGFILE_NAME} if ! diff -u ${TEMPDIR}/${CFGFILE_NAME} ${CFGFILE} then echo "${0##*/}: ${PROJECT_NAME}.conf.sample is not up to date." echo "${0##*/}: Please run tox -egenconfig." 
exit 1 fi mistral-6.0.0/tools/config/policy-generator.mistral.conf0000666000175100017510000000003513245513262023430 0ustar zuulzuul00000000000000[DEFAULT] namespace = mistralmistral-6.0.0/tools/config/config-generator.mistral.conf0000666000175100017510000000041213245513262023375 0ustar zuulzuul00000000000000[DEFAULT] namespace = mistral.config namespace = oslo.db namespace = oslo.messaging namespace = oslo.middleware.cors namespace = keystonemiddleware.auth_token namespace = periodic.config namespace = oslo.log namespace = oslo.policy namespace = oslo.service.sslutils mistral-6.0.0/tools/cover.sh0000777000175100017510000000465613245513262016051 0ustar zuulzuul00000000000000#!/bin/bash # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. ALLOWED_EXTRA_MISSING=4 show_diff () { head -1 $1 diff -U 0 $1 $2 | sed 1,2d } # Stash uncommitted changes, checkout previous commit and save coverage report uncommitted=$(git status --porcelain | grep -v "^??") [[ -n $uncommitted ]] && git stash > /dev/null git checkout HEAD^ baseline_report=$(mktemp -t mistral_coverageXXXXXXX) find . -type f -name "*.pyc" -delete && python setup.py testr --coverage --testr-args="$*" coverage report -m > $baseline_report baseline_missing=$(awk 'END { print $3 }' $baseline_report) previous_sha=$(git rev-parse HEAD); # Checkout back and unstash uncommitted changes (if any) git checkout - [[ -n $uncommitted ]] && git stash pop > /dev/null # Erase previously collected coverage data. 
coverage erase; # Generate and save coverage report current_report=$(mktemp -t mistral_coverageXXXXXXX) find . -type f -name "*.pyc" -delete && python setup.py testr --coverage --testr-args="$*" coverage report -m > $current_report current_missing=$(awk 'END { print $3 }' $current_report) # Show coverage details allowed_missing=$((baseline_missing+ALLOWED_EXTRA_MISSING)) echo "Allowed to introduce missing lines : ${ALLOWED_EXTRA_MISSING}" echo "Compared against ${previous_sha}"; echo "Missing lines in previous commit : ${baseline_missing}" echo "Missing lines in proposed change : ${current_missing}" if [ $allowed_missing -gt $current_missing ]; then if [ $baseline_missing -lt $current_missing ]; then show_diff $baseline_report $current_report echo "I believe you can cover all your code with 100% coverage!" else echo "Thank you! You are awesome! Keep writing unit tests! :)" fi exit_code=0 else show_diff $baseline_report $current_report echo "Please write more unit tests, we should keep our test coverage :( " exit_code=1 fi rm $baseline_report $current_report exit $exit_code mistral-6.0.0/tools/install_venv_common.py0000666000175100017510000001350713245513262021015 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Provides methods needed by installation script for OpenStack development virtual environments.
Since this script is used to bootstrap a virtualenv from the system's Python environment, it should be kept strictly compatible with Python 2.6. Synced in from openstack-common """ from __future__ import print_function import optparse import os import subprocess import sys class InstallVenv(object): def __init__(self, root, venv, requirements, test_requirements, py_version, project): self.root = root self.venv = venv self.requirements = requirements self.test_requirements = test_requirements self.py_version = py_version self.project = project def die(self, message, *args): print(message % args, file=sys.stderr) sys.exit(1) def check_python_version(self): if sys.version_info < (2, 6): self.die("Need Python Version >= 2.6") def run_command_with_code(self, cmd, redirect_output=True, check_exit_code=True): """Runs a command in an out-of-process shell. Returns the output of that command. Working directory is self.root. """ if redirect_output: stdout = subprocess.PIPE else: stdout = None proc = subprocess.Popen(cmd, cwd=self.root, stdout=stdout) output = proc.communicate()[0] if check_exit_code and proc.returncode != 0: self.die('Command "%s" failed.\n%s', ' '.join(cmd), output) return (output, proc.returncode) def run_command(self, cmd, redirect_output=True, check_exit_code=True): return self.run_command_with_code(cmd, redirect_output, check_exit_code)[0] def get_distro(self): if (os.path.exists('/etc/fedora-release') or os.path.exists('/etc/redhat-release')): return Fedora( self.root, self.venv, self.requirements, self.test_requirements, self.py_version, self.project) else: return Distro( self.root, self.venv, self.requirements, self.test_requirements, self.py_version, self.project) def check_dependencies(self): self.get_distro().install_virtualenv() def create_virtualenv(self, no_site_packages=True): """Creates the virtual environment and installs PIP. Creates the virtual environment and installs PIP only into the virtual environment. 
""" if not os.path.isdir(self.venv): print('Creating venv...', end=' ') if no_site_packages: self.run_command(['virtualenv', '-q', '--no-site-packages', self.venv]) else: self.run_command(['virtualenv', '-q', self.venv]) print('done.') else: print("venv already exists...") pass def pip_install(self, *args): self.run_command(['tools/with_venv.sh', 'pip', 'install', '--upgrade'] + list(args), redirect_output=False) def install_dependencies(self): print('Installing dependencies with pip (this can take a while)...') # First things first, make sure our venv has the latest pip and # setuptools and pbr self.pip_install('pip>=1.4') self.pip_install('setuptools') self.pip_install('pbr') self.pip_install('-r', self.requirements, '-r', self.test_requirements) def parse_args(self, argv): """Parses command-line arguments.""" parser = optparse.OptionParser() parser.add_option('-n', '--no-site-packages', action='store_true', help="Do not inherit packages from global Python " "install") return parser.parse_args(argv[1:])[0] class Distro(InstallVenv): def check_cmd(self, cmd): return bool(self.run_command(['which', cmd], check_exit_code=False).strip()) def install_virtualenv(self): if self.check_cmd('virtualenv'): return if self.check_cmd('easy_install'): print('Installing virtualenv via easy_install...', end=' ') if self.run_command(['easy_install', 'virtualenv']): print('Succeeded') return else: print('Failed') self.die('ERROR: virtualenv not found.\n\n%s development' ' requires virtualenv, please install it using your' ' favorite package management tool' % self.project) class Fedora(Distro): """This covers all Fedora-based distributions. 
Includes: Fedora, RHEL, CentOS, Scientific Linux """ def check_pkg(self, pkg): return self.run_command_with_code(['rpm', '-q', pkg], check_exit_code=False)[1] == 0 def install_virtualenv(self): if self.check_cmd('virtualenv'): return if not self.check_pkg('python-virtualenv'): self.die("Please install 'python-virtualenv'.") super(Fedora, self).install_virtualenv() mistral-6.0.0/tools/install_venv.py0000666000175100017510000000456313245513262017447 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Copyright 2010 OpenStack Foundation # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from __future__ import print_function import os import sys import install_venv_common as install_venv def print_help(venv, root): help = """ Mistral development environment setup is complete. Mistral development uses virtualenv to track and manage Python dependencies while in development and testing. To activate the Mistral virtualenv for the extent of your current shell session you can run: $ . %s/bin/activate Or, if you prefer, you can run commands in the virtualenv on a case by case basis by running: $ %s/tools/with_venv.sh Also, make test will automatically use the virtualenv. 
""" print(help % (venv, root)) def main(argv): root = os.path.dirname(os.path.dirname(os.path.realpath(__file__))) if os.environ.get('tools_path'): root = os.environ['tools_path'] venv = os.path.join(root, '.venv') if os.environ.get('venv'): venv = os.environ['venv'] pip_requires = os.path.join(root, 'requirements.txt') test_requires = os.path.join(root, 'test-requirements.txt') py_version = "python%s.%s" % (sys.version_info[0], sys.version_info[1]) project = 'Mistral' install = install_venv.InstallVenv(root, venv, pip_requires, test_requires, py_version, project) options = install.parse_args(argv) install.check_python_version() install.check_dependencies() install.create_virtualenv(no_site_packages=options.no_site_packages) install.install_dependencies() print_help(venv, root) if __name__ == '__main__': sys.exit(main(sys.argv)) mistral-6.0.0/AUTHORS0000664000175100017510000001675213245513601014277 0ustar zuulzuul00000000000000Abhishek Chanda Adriano Petrich Alexander Kuznetsov Alfredo Moralejo Anastasia Kuznetsova Andras Kovi Andreas Jaeger Angus Salkeld Ankita Wagh Antoine Musso Bertrand Lallau Bertrand Lallau Bhaskar Duvvuri Bob HADDLETON Bob Haddleton Bob.Haddleton Boris Bobrov Boris Pavlovic Brad P. Crochet Béla Vancsics Cao Xuan Hoang Chandan Kumar Chaozhe.Chen Chen Eilat Christian Berendt Claudiu Belu Dai Dang Van Dan Prince Dao Cong Tien Daryl Mowrer David C Kennedy Dawid Deja Derek Higgins Dirk Mueller Dmitri Zimine Dmitry Tantsur Doug Hellmann Dougal Matthews Ed Cranford Emilien Macchi Endre János Kovács Eyal Fei Long Wang Flavio Percoco Gal Margalit Guy Paz Hangdong Zhang Hardik Parekh Hieu LE Honza Pokorny Istvan Imre Istvan Imre James E. 
Blair Jeff Peeler Jeffrey Guan Jeffrey Zhang Jeremy Liu Jeremy Stanley Ji zhaoxuan Ji-Wei Jiri Tomasek Juan Antonio Osorio Robles Kaustuv Royburman Kevin Pouget Kien Nguyen Kirill Izotov Kupai József Lakshmi Kannan Lakshmi Kannan Limor Limor Stotland Lingxian Kong LingxianKong LingxianKong LiuNanke Lucky samadhiya Luong Anh Tuan Manas Kelshikar MaoyangLiu Marcos Fermin Lobo Michael Krotscheck Michal Gershenzon Michal Gershenzon Michal Gershenzon Mike Fedosin Miles Gould Monty Taylor Morgan Jones Moshe Elisha Márton Csuha Nguyen Hung Phuong Nguyen Van Trung Nick Maludy Nikolay Mahotkin Nikolay Mahotkin Nina Goradia Nishant Kumar Noa Koffman Noa Koffman Oleksii Chuprykov OpenStack Release Bot PanFengyun Pierre-Arthur MATHIEU Pradeep Kilambi Prince Katiyar Rajiv Kumar Ray Chen Renat Akhmerov Renat Akhmerov Renato Recio Rinat Sabitov Roman Dobosz Ryan Brady Sergey Kolekonov Sergey Murashov Shaik Apsar Sharat Sharat Sharma Shuquan Huang Spencer Yu Steven Hardy Thierry Carrez Thomas Goirand Thomas Herve Timur Nurlygayanov TimurNurlygayanov Toure Dunnon TuanLuong Van Hung Pham Venkata Mahesh Jonnalagadda Vitalii Solodilov Vu Cong Tuan W Chan Winson Chan Winson Chan Xavier Hardy XieYingYun Yaroslav Lobankov Zane Bitter Zhao Lei Zhenguo Niu ZhiQiang Fan ZhiQiang Fan Zuul avnish bhavenst byhan caoyue chenaidong1 cheneydc chenjiao chenxiangui csatari dharmendra dzimine fengchaoyang gecong1973 gengchc2 ghanshyam hardik hardikj hnyang howardlee hparekh int32bit junboli keliang kennedda kong liu-sheng liuyamin lixinhui loooosy lvdongbing manasdk noakoffman pawnesh.kumar pengdake <19921207pq@gmail.com> rajat29 rakhmerov ravikiran rico.lin ricolin rsritesh shubhendu syed ahsan shamim zaidi tengqm venkatamahesh wangxu wangzhh wudong xpress yong sheng gong ypbao yushangbin zhangdetong zhangguoqing zhangyanxian zhangyanxian zhu.rong mistral-6.0.0/functionaltests/0000775000175100017510000000000013245513604016444 5ustar 
zuulzuul00000000000000mistral-6.0.0/functionaltests/run_tests.sh0000777000175100017510000000277313245513261021043 0ustar zuulzuul00000000000000#!/bin/bash # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # How many seconds to wait for the API to be responding before giving up API_RESPONDING_TIMEOUT=20 if ! timeout ${API_RESPONDING_TIMEOUT} sh -c "until curl --output /dev/null --silent --head --fail http://localhost:8989; do sleep 1; done"; then echo "Mistral API failed to respond within ${API_RESPONDING_TIMEOUT} seconds" exit 1 fi echo "Successfully contacted Mistral API" # Where tempest code lives TEMPEST_DIR=${TEMPEST_DIR:-/opt/stack/tempest} # Path to directory with tempest.conf file, otherwise it will # take relative path from where the run tests command is being executed. export TEMPEST_CONFIG_DIR=${TEMPEST_CONFIG_DIR:-$TEMPEST_DIR/etc/} echo "Tempest configuration file directory: $TEMPEST_CONFIG_DIR" # Where mistral code and mistralclient code live MISTRAL_DIR=/opt/stack/mistral MISTRALCLIENT_DIR=/opt/stack/python-mistralclient # Define PYTHONPATH export PYTHONPATH=$PYTHONPATH:$TEMPEST_DIR pwd nosetests -sv mistral_tempest_tests/tests/ mistral-6.0.0/functionaltests/post_test_hook.sh0000777000175100017510000000233713245513261022055 0ustar zuulzuul00000000000000#!/bin/bash # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This script is executed inside post_test_hook function in devstack gate. set -ex sudo chmod -R a+rw /opt/stack/ (cd $BASE/new/tempest/; sudo virtualenv .venv) source $BASE/new/tempest/.venv/bin/activate (cd $BASE/new/tempest/; sudo pip install -r requirements.txt -r test-requirements.txt) sudo pip install nose sudo pip install numpy sudo cp $BASE/new/tempest/etc/logging.conf.sample $BASE/new/tempest/etc/logging.conf (cd $BASE/new/mistral/; sudo pip install -r requirements.txt -r test-requirements.txt) (cd $BASE/new/mistral/; sudo python setup.py install) export TOX_TESTENV_PASSENV=ZUUL_PROJECT (cd $BASE/new/tempest/; sudo -E tox -evenv-tempest -- tempest run -r mistral) mistral-6.0.0/README.rst0000666000175100017510000002101413245513261014705 0ustar zuulzuul00000000000000======================== Team and repository tags ======================== .. image:: https://governance.openstack.org/tc/badges/mistral.svg :target: https://governance.openstack.org/tc/reference/tags/index.html Mistral ======= Workflow Service for OpenStack cloud. This project aims to provide a mechanism to define tasks and workflows without writing code, manage and execute them in the cloud environment. Installation ~~~~~~~~~~~~ The following are the steps to install Mistral on debian-based systems. 
To install Mistral, you have to install the following prerequisites:: $ apt-get install python-dev python-setuptools libffi-dev \ libxslt1-dev libxml2-dev libyaml-dev libssl-dev **Mistral can be used without authentication at all or it can work with OpenStack.** In case of OpenStack, it works **only with Keystone v3**, make sure **Keystone v3** is installed. Install Mistral --------------- First of all, clone the repo and go to the repo directory:: $ git clone https://git.openstack.org/openstack/mistral.git $ cd mistral **Devstack installation** Information about how to install Mistral with devstack can be found `here `_. Configuring Mistral ~~~~~~~~~~~~~~~~~~~ Mistral configuration is needed for getting it work correctly with and without an OpenStack environment. #. Install and configure a database which can be *MySQL* or *PostgreSQL* (**SQLite can't be used in production.**). Here are the steps to connect Mistral to a *MySQL* database. * Make sure you have installed ``mysql-server`` package on your Mistral machine. * Install *MySQL driver* for python:: $ pip install mysql-python or, if you work in virtualenv, run:: $ tox -evenv -- pip install mysql-python NOTE: If you're using Python 3 then you need to install ``mysqlclient`` instead of ``mysql-python``. * Create the database and grant privileges:: $ mysql -u root -p mysql> CREATE DATABASE mistral; mysql> USE mistral mysql> GRANT ALL PRIVILEGES ON mistral.* TO 'mistral'@'localhost' \ IDENTIFIED BY 'MISTRAL_DBPASS'; mysql> GRANT ALL PRIVILEGES ON mistral.* TO 'mistral'@'%' IDENTIFIED BY 'MISTRAL_DBPASS'; #. Generate ``mistral.conf`` file:: $ oslo-config-generator --config-file tools/config/config-generator.mistral.conf \ --output-file etc/mistral.conf.sample #. 
Copy service configuration files:: $ sudo mkdir /etc/mistral $ sudo chown `whoami` /etc/mistral $ cp etc/event_definitions.yml.sample /etc/mistral/event_definitions.yml $ cp etc/logging.conf.sample /etc/mistral/logging.conf $ cp etc/policy.json /etc/mistral/policy.json $ cp etc/wf_trace_logging.conf.sample /etc/mistral/wf_trace_logging.conf $ cp etc/mistral.conf.sample /etc/mistral/mistral.conf #. Edit file ``/etc/mistral/mistral.conf`` according to your setup. Pay attention to the following sections and options:: [oslo_messaging_rabbit] rabbit_host = rabbit_userid = rabbit_password = [database] # Use the following line if *PostgreSQL* is used # connection = postgresql://:@localhost:5432/mistral connection = mysql://:@localhost:3306/mistral #. If you are not using OpenStack, add the following entry to the ``/etc/mistral/mistral.conf`` file and **skip the following steps**:: [pecan] auth_enable = False #. Provide valid keystone auth properties:: [keystone_authtoken] auth_uri = http://keystone-host:port/v3 auth_url = http://keystone-host:port auth_type = password username = password = user_domain_name = project_name = project_domain_name = #. Register Mistral service and Mistral endpoints on Keystone:: $ MISTRAL_URL="http://[host]:[port]/v2" $ openstack service create --name mistral workflowv2 $ openstack endpoint create mistral public $MISTRAL_URL $ openstack endpoint create mistral internal $MISTRAL_URL $ openstack endpoint create mistral admin $MISTRAL_URL #. Update the ``mistral/actions/openstack/mapping.json`` file which contains all available OpenStack actions, according to the specific client versions of OpenStack projects in your deployment. Please find more detailed information in the ``tools/get_action_list.py`` script. Before the First Run -------------------- After local installation you will find the commands ``mistral-server`` and ``mistral-db-manage`` available in your environment. 
The ``mistral-db-manage`` command can be used for migrating database schema versions. If Mistral is not installed in system then this script can be found at ``mistral/db/sqlalchemy/migration/cli.py``, it can be executed using Python command line. To update the database schema to the latest revision, type:: $ mistral-db-manage --config-file upgrade head To populate the database with standard actions and workflows, type:: $ mistral-db-manage --config-file populate For more detailed information about ``mistral-db-manage`` script please check file ``mistral/db/sqlalchemy/migration/alembic_migrations/README.md``. Running Mistral API server -------------------------- To run Mistral API server:: $ tox -evenv -- python mistral/cmd/launch.py --server api --config-file Running Mistral Engines ----------------------- To run Mistral Engine:: $ tox -evenv -- python mistral/cmd/launch.py --server engine --config-file Running Mistral Task Executors ------------------------------ To run Mistral Task Executor instance:: $ tox -evenv -- python mistral/cmd/launch.py --server executor --config-file Note that at least one Engine instance and one Executor instance should be running in order for workflow tasks to be processed by Mistral. If you want to run some tasks on specific executor, the *task affinity* feature can be used to send these tasks directly to a specific executor. You can edit the following property in your mistral configuration file for this purpose:: [executor] host = my_favorite_executor After changing this option, you will need to start (restart) the executor. Use the ``target`` property of a task to specify the executor:: ... Workflow YAML ... task1: ... target: my_favorite_executor ... Workflow YAML ... 
Running Multiple Mistral Servers Under the Same Process ------------------------------------------------------- To run more than one server (API, Engine, or Task Executor) on the same process:: $ tox -evenv -- python mistral/cmd/launch.py --server api,engine --config-file The value for the ``--server`` option can be a comma-delimited list. The valid options are ``all`` (which is the default if not specified) or any combination of ``api``, ``engine``, and ``executor``. It's important to note that the ``fake`` transport for the ``rpc_backend`` defined in the configuration file should only be used if ``all`` Mistral servers are launched on the same process. Otherwise, messages do not get delivered because the ``fake`` transport is using an in-process queue. Project Goals 2018 ------------------ #. **Complete Mistral documentation**. Mistral documentation should be more usable. It requires focused work to make it well structured, eliminate gaps in API/Mistral Workflow Language specifications, add more examples and tutorials. *Definition of done*: All capabilities are covered, all documentation topics are written using the same style and structure principles. The obvious sub-goal of this goal is to establish these principles. #. **Finish Mistral multi-node mode**. Mistral needs to be proven to work reliably in multi-node mode. In order to achieve it we need to make a number of engine, executor and RPC changes and configure a CI gate to run stress tests on multi-node Mistral. *Definition of done*: CI gate supports MySQL, all critically important functionality (join, with-items, parallel workflows, sequential workflows) is covered by tests. 
Project Resources ----------------- * `Mistral Official Documentation `_ * Project status, bugs, and blueprints are tracked on `Launchpad `_ * Additional resources are linked from the project `Wiki `_ page * Apache License Version 2.0 http://www.apache.org/licenses/LICENSE-2.0 mistral-6.0.0/.coveragerc0000666000175100017510000000014413245513261015340 0ustar zuulzuul00000000000000[run] branch = True source = mistral omit = .tox/* mistral/tests/* [report] ignore_errors = True mistral-6.0.0/api-ref/0000775000175100017510000000000013245513604014542 5ustar zuulzuul00000000000000mistral-6.0.0/api-ref/source/0000775000175100017510000000000013245513604016042 5ustar zuulzuul00000000000000mistral-6.0.0/api-ref/source/v2/0000775000175100017510000000000013245513604016371 5ustar zuulzuul00000000000000mistral-6.0.0/api-ref/source/v2/workbook.inc0000666000175100017510000000000013245513261020710 0ustar zuulzuul00000000000000mistral-6.0.0/api-ref/source/v2/action.inc0000666000175100017510000000000013245513261020330 0ustar zuulzuul00000000000000mistral-6.0.0/api-ref/source/v2/cron-trigger.inc0000666000175100017510000000000013245513261021455 0ustar zuulzuul00000000000000mistral-6.0.0/api-ref/source/v2/workflow.inc0000666000175100017510000000000013245513261020725 0ustar zuulzuul00000000000000mistral-6.0.0/api-ref/source/v2/task.inc0000666000175100017510000000000013245513261020015 0ustar zuulzuul00000000000000mistral-6.0.0/api-ref/source/v2/execution.inc0000666000175100017510000000000013245513261021056 0ustar zuulzuul00000000000000mistral-6.0.0/api-ref/source/conf.py0000666000175100017510000001032113245513261017337 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import os import subprocess import sys on_rtd = os.environ.get('READTHEDOCS', None) == 'True' # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.abspath('../../')) sys.path.insert(0, os.path.abspath('../')) sys.path.insert(0, os.path.abspath('./')) # -- General configuration ---------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = [ 'sphinx.ext.autodoc', 'sphinxcontrib.autohttp.flask', 'sphinxcontrib.pecanwsme.rest', ] if not on_rtd: extensions.append('oslosphinx') wsme_protocols = ['restjson'] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # autodoc generation is a bit aggressive and a nuisance when doing heavy # text edit cycles. # execute "export SPHINX_DEBUG=1" in your terminal to disable # The suffix of source filenames. source_suffix = '.rst' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Workflow Service API Reference' copyright = u'2017, Mistral Contributors' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. 
from mistral.version import version_info release = version_info.release_string() version = version_info.version_string() # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. show_authors = False # If true, '()' will be appended to :func: etc. cross-reference text. add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). add_module_names = True # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # -- Options for HTML output -------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. # html_static_path = ['_static'] if on_rtd: html_theme_path = ['.'] html_theme = 'sphinx_rtd_theme' # Output file base name for HTML help builder. htmlhelp_basename = '%sdoc' % project # A list of ignored prefixes for module index sorting. modindex_common_prefix = ['mistral.'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. # html_last_updated_fmt = '%b %d, %Y' git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'", "--date=local", "-n1"] html_last_updated_fmt = subprocess.check_output( git_cmd).decode('utf-8') # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". html_title = 'Mistral API Reference' # Custom sidebar templates, maps document names to template names. html_sidebars = { 'index': [ 'sidebarlinks.html', 'localtoc.html', 'searchbox.html', 'sourcelink.html' ], '**': [ 'localtoc.html', 'relations.html', 'searchbox.html', 'sourcelink.html' ] } # -- Options for manual page output ------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). 
man_pages = [ ('index', 'mistral', u'Mistral', [u'OpenStack Foundation'], 1) ] # If true, show URL addresses after external links. man_show_urls = True mistral-6.0.0/api-ref/source/index.rst0000666000175100017510000000014113245513261017700 0ustar zuulzuul00000000000000=============================== OpenStack Workflow Service APIs =============================== mistral-6.0.0/setup.cfg0000666000175100017510000000707413245513605015053 0ustar zuulzuul00000000000000[metadata] name = mistral summary = Mistral Project description-file = README.rst license = Apache License, Version 2.0 home-page = https://docs.openstack.org/mistral/latest/ classifiers = Programming Language :: Python Programming Language :: Python :: 2 Programming Language :: Python :: 2.7 Programming Language :: Python :: 3 Programming Language :: Python :: 3.5 Environment :: OpenStack Intended Audience :: Information Technology Intended Audience :: System Administrators License :: OSI Approved :: Apache Software License Operating System :: POSIX :: Linux author = OpenStack author-email = openstack-dev@lists.openstack.org [files] packages = mistral mistral_tempest_tests [build_sphinx] source-dir = doc/source build-dir = doc/build all_files = 1 warning-is-error = 1 [upload_sphinx] upload-dir = doc/build/html [entry_points] console_scripts = mistral-server = mistral.cmd.launch:main mistral-db-manage = mistral.db.sqlalchemy.migration.cli:main wsgi_scripts = mistral-wsgi-api = mistral.api.app:init_wsgi mistral.rpc.backends = oslo_client = mistral.rpc.oslo.oslo_client:OsloRPCClient oslo_server = mistral.rpc.oslo.oslo_server:OsloRPCServer kombu_client = mistral.rpc.kombu.kombu_client:KombuRPCClient kombu_server = mistral.rpc.kombu.kombu_server:KombuRPCServer oslo.config.opts = mistral.config = mistral.config:list_opts oslo.config.opts.defaults = mistral.config = mistral.config:set_cors_middleware_defaults oslo.policy.policies = mistral = mistral.policies:list_rules mistral.actions = std.async_noop = 
mistral.actions.std_actions:AsyncNoOpAction std.noop = mistral.actions.std_actions:NoOpAction std.fail = mistral.actions.std_actions:FailAction std.echo = mistral.actions.std_actions:EchoAction std.http = mistral.actions.std_actions:HTTPAction std.mistral_http = mistral.actions.std_actions:MistralHTTPAction std.ssh = mistral.actions.std_actions:SSHAction std.ssh_proxied = mistral.actions.std_actions:SSHProxiedAction std.email = mistral.actions.std_actions:SendEmailAction std.javascript = mistral.actions.std_actions:JavaScriptAction std.js = mistral.actions.std_actions:JavaScriptAction std.sleep = mistral.actions.std_actions:SleepAction std.test_dict = mistral.actions.std_actions:TestDictAction mistral.executors = local = mistral.executors.default_executor:DefaultExecutor remote = mistral.executors.remote_executor:RemoteExecutor mistral.expression.functions = # json_pp was deprecated in Queens and will be removed in the S cycle json_pp = mistral.utils.expression_utils:json_pp_ env = mistral.utils.expression_utils:env_ execution = mistral.utils.expression_utils:execution_ executions = mistral.utils.expression_utils:executions_ global = mistral.utils.expression_utils:global_ json_parse = mistral.utils.expression_utils:json_parse_ json_dump = mistral.utils.expression_utils:json_dump_ task = mistral.utils.expression_utils:task_ tasks = mistral.utils.expression_utils:tasks_ uuid = mistral.utils.expression_utils:uuid_ yaml_parse = mistral.utils.expression_utils:yaml_parse_ yaml_dump = mistral.utils.expression_utils:yaml_dump_ mistral.expression.evaluators = yaql = mistral.expressions.yaql_expression:InlineYAQLEvaluator jinja = mistral.expressions.jinja_expression:InlineJinjaEvaluator mistral.auth = keystone = mistral.auth.keystone:KeystoneAuthHandler keycloak-oidc = mistral.auth.keycloak:KeycloakAuthHandler kombu_driver.executors = blocking = futurist:SynchronousExecutor threading = futurist:ThreadPoolExecutor pygments.lexers = mistral = 
mistral.ext.pygmentplugin:MistralLexer [egg_info] tag_build = tag_date = 0 mistral-6.0.0/rally-jobs/0000775000175100017510000000000013245513604015275 5ustar zuulzuul00000000000000mistral-6.0.0/rally-jobs/task-mistral.yaml0000666000175100017510000000655013245513262020604 0ustar zuulzuul00000000000000{% set extra_dir = extra_dir or env["RALLY_EXTRA_DIR"] %} --- MistralWorkbooks.list_workbooks: - runner: type: "constant" times: 50 concurrency: 10 context: users: tenants: 1 users_per_tenant: 1 sla: failure_rate: max: 0 MistralWorkbooks.create_workbook: - args: definition: "{{ extra_dir }}/mistral_wb.yaml" runner: type: "constant" times: 50 concurrency: 10 context: users: tenants: 1 users_per_tenant: 1 sla: failure_rate: max: 0 - args: definition: "{{ extra_dir }}/mistral_wb.yaml" do_delete: true runner: type: "constant" times: 50 concurrency: 10 context: users: tenants: 1 users_per_tenant: 1 sla: failure_rate: max: 0 MistralExecutions.list_executions: - runner: type: "constant" times: 50 concurrency: 10 context: users: tenants: 1 users_per_tenant: 1 sla: failure_rate: max: 0 MistralExecutions.create_execution_from_workbook: - args: definition: "{{ extra_dir }}/mistral_wb.yaml" do_delete: true runner: type: "constant" times: 20 concurrency: 5 context: users: tenants: 1 users_per_tenant: 1 sla: failure_rate: max: 0 - args: definition: "{{ extra_dir }}/nested_wb.yaml" workflow_name: "wrapping_wf" do_delete: true runner: type: "constant" times: 20 concurrency: 5 context: users: tenants: 1 users_per_tenant: 1 sla: failure_rate: max: 0 - args: definition: "{{ extra_dir }}/scenarios/complex_wf/complex_wf_wb.yaml" workflow_name: "top_level_workflow" params: "{{ extra_dir }}/scenarios/complex_wf/complex_wf_params.json" do_delete: true runner: type: "constant" times: 20 concurrency: 5 context: users: tenants: 1 users_per_tenant: 1 sla: failure_rate: max: 0 - args: definition: "{{ extra_dir }}/scenarios/with_items/wb.yaml" params: "{{ extra_dir 
}}/scenarios/with_items/count_100_concurrency_10.json" do_delete: true runner: type: "constant" times: 20 concurrency: 5 context: users: tenants: 1 users_per_tenant: 1 sla: failure_rate: max: 0 - args: definition: "{{ extra_dir }}/scenarios/join/join_100_wb.yaml" do_delete: true runner: type: "constant" times: 20 concurrency: 5 context: users: tenants: 1 users_per_tenant: 1 sla: failure_rate: max: 0 - args: definition: "{{ extra_dir }}/scenarios/join/join_500_wb.yaml" do_delete: true runner: type: "constant" times: 10 concurrency: 1 context: users: tenants: 1 users_per_tenant: 1 sla: failure_rate: max: 0mistral-6.0.0/rally-jobs/plugins/0000775000175100017510000000000013245513604016756 5ustar zuulzuul00000000000000mistral-6.0.0/rally-jobs/plugins/README.rst0000666000175100017510000000060613245513262020451 0ustar zuulzuul00000000000000Rally plugins ============= All *.py modules from this directory will be auto-loaded by Rally and all plugins will be discoverable. There is no need of any extra configuration and there is no difference between writing them here and in rally code base. Note that it is better to push all interesting and useful benchmarks to Rally code base, this simplifies administration for Operators. mistral-6.0.0/rally-jobs/plugins/__init__.py0000666000175100017510000000000013245513262021057 0ustar zuulzuul00000000000000mistral-6.0.0/rally-jobs/README.rst0000666000175100017510000000166013245513262016771 0ustar zuulzuul00000000000000Rally job related files ======================= This directory contains rally tasks and plugins that are run by OpenStack CI. Structure --------- * task-mistral.yaml is a task that will be run in gates against OpenStack deployed by DevStack with installed Rally & Mistral. * plugins - directory where you can add rally plugins. Almost everything in Rally is plugin. Benchmark context, Benchmark scenario, SLA checks, Generic cleanup resources, .... 
* extra - all files from this directory will be copy pasted to gates, so you are able to use absolute path in rally tasks. Files will be in ~/.rally/extra/* Useful links ------------ * More about rally: https://rally.readthedocs.org/en/latest/ * How to add rally-gates: https://rally.readthedocs.org/en/latest/gates.html * About plugins: https://rally.readthedocs.org/en/latest/plugins.html * Plugin samples: https://github.com/openstack/rally/tree/master/samples/plugins mistral-6.0.0/rally-jobs/extra/0000775000175100017510000000000013245513604016420 5ustar zuulzuul00000000000000mistral-6.0.0/rally-jobs/extra/README.rst0000666000175100017510000000025413245513262020112 0ustar zuulzuul00000000000000Extra files =========== All files from this directory will be copy pasted to gates, so you are able to use absolute path in rally tasks. Files will be in ~/.rally/extra/* mistral-6.0.0/rally-jobs/extra/mistral_wb.yaml0000666000175100017510000000025013245513262021446 0ustar zuulzuul00000000000000--- version: "2.0" name: wb workflows: wf1: type: direct tasks: hello: action: std.echo output="Hello" publish: result: $ mistral-6.0.0/rally-jobs/extra/nested_wb.yaml0000666000175100017510000000255513245513262021267 0ustar zuulzuul00000000000000--- version: "2.0" name: wb workflows: wrapping_wf: type: direct tasks: call_inner_wf_1: workflow: inner_wf call_inner_wf_2: workflow: inner_wf call_inner_wf_3: workflow: inner_wf call_inner_wf_4: workflow: inner_wf inner_wf: type: direct tasks: hello1: action: std.echo output="Hello" publish: result: $ hello2: action: std.echo output="Hello" publish: result: $ on-success: - world hello3: action: std.echo output="Hello" publish: result: $ on-success: - world hello4: action: std.echo output="Hello" publish: result: $ on-success: - world world: action: std.echo output="World" join: all publish: result: $ on-success: - test1 - test2 - test3 - test4 test1: action: std.echo output="Test!!" publish: result: $ test2: action: std.echo output="Test!!" 
publish: result: $ test3: action: std.echo output="Test!!" publish: result: $ test4: action: std.echo output="Test!!" publish: result: $ mistral-6.0.0/rally-jobs/extra/scenarios/0000775000175100017510000000000013245513604020406 5ustar zuulzuul00000000000000mistral-6.0.0/rally-jobs/extra/scenarios/join/0000775000175100017510000000000013245513604021345 5ustar zuulzuul00000000000000mistral-6.0.0/rally-jobs/extra/scenarios/join/join_100_wb.yaml0000666000175100017510000001106213245513262024242 0ustar zuulzuul00000000000000--- version: '2.0' name: join_100_wb workflows: wf: description: contains "join" that joins 100 parallel tasks tasks: join_task: join: all task_1: on-success: join_task task_2: on-success: join_task task_3: on-success: join_task task_4: on-success: join_task task_5: on-success: join_task task_6: on-success: join_task task_7: on-success: join_task task_8: on-success: join_task task_9: on-success: join_task task_10: on-success: join_task task_11: on-success: join_task task_12: on-success: join_task task_13: on-success: join_task task_14: on-success: join_task task_15: on-success: join_task task_16: on-success: join_task task_17: on-success: join_task task_18: on-success: join_task task_19: on-success: join_task task_20: on-success: join_task task_21: on-success: join_task task_22: on-success: join_task task_23: on-success: join_task task_24: on-success: join_task task_25: on-success: join_task task_26: on-success: join_task task_27: on-success: join_task task_28: on-success: join_task task_29: on-success: join_task task_30: on-success: join_task task_31: on-success: join_task task_32: on-success: join_task task_33: on-success: join_task task_34: on-success: join_task task_35: on-success: join_task task_36: on-success: join_task task_37: on-success: join_task task_38: on-success: join_task task_39: on-success: join_task task_40: on-success: join_task task_41: on-success: join_task task_42: on-success: join_task task_43: on-success: join_task task_44: 
        on-success: join_task
      task_45:
        on-success: join_task
      task_46:
        on-success: join_task
      task_47:
        on-success: join_task
      task_48:
        on-success: join_task
      task_49:
        on-success: join_task
      task_50:
        on-success: join_task
      task_51:
        on-success: join_task
      task_52:
        on-success: join_task
      task_53:
        on-success: join_task
      task_54:
        on-success: join_task
      task_55:
        on-success: join_task
      task_56:
        on-success: join_task
      task_57:
        on-success: join_task
      task_58:
        on-success: join_task
      task_59:
        on-success: join_task
      task_60:
        on-success: join_task
      task_61:
        on-success: join_task
      task_62:
        on-success: join_task
      task_63:
        on-success: join_task
      task_64:
        on-success: join_task
      task_65:
        on-success: join_task
      task_66:
        on-success: join_task
      task_67:
        on-success: join_task
      task_68:
        on-success: join_task
      task_69:
        on-success: join_task
      task_70:
        on-success: join_task
      task_71:
        on-success: join_task
      task_72:
        on-success: join_task
      task_73:
        on-success: join_task
      task_74:
        on-success: join_task
      task_75:
        on-success: join_task
      task_76:
        on-success: join_task
      task_77:
        on-success: join_task
      task_78:
        on-success: join_task
      task_79:
        on-success: join_task
      task_80:
        on-success: join_task
      task_81:
        on-success: join_task
      task_82:
        on-success: join_task
      task_83:
        on-success: join_task
      task_84:
        on-success: join_task
      task_85:
        on-success: join_task
      task_86:
        on-success: join_task
      task_87:
        on-success: join_task
      task_88:
        on-success: join_task
      task_89:
        on-success: join_task
      task_90:
        on-success: join_task
      task_91:
        on-success: join_task
      task_92:
        on-success: join_task
      task_93:
        on-success: join_task
      task_94:
        on-success: join_task
      task_95:
        on-success: join_task
      task_96:
        on-success: join_task
      task_97:
        on-success: join_task
      task_98:
        on-success: join_task
      task_99:
        on-success: join_task
      task_100:
        on-success: join_task

# File: mistral-6.0.0/rally-jobs/extra/scenarios/join/join_500_wb.yaml
---
version: '2.0'

name: join_500_wb

workflows:
  wf:
    description: contains "join" that joins 500 parallel tasks

    tasks:
      join_task:
        join: all

      task_1:
        on-success: join_task
      task_2:
        on-success: join_task
      task_3:
        on-success: join_task
      task_4:
        on-success: join_task
      task_5:
        on-success: join_task
      task_6:
        on-success: join_task
      task_7:
        on-success: join_task
      task_8:
        on-success: join_task
      task_9:
        on-success: join_task
      task_10:
        on-success: join_task
      task_11:
        on-success: join_task
      task_12:
        on-success: join_task
      task_13:
        on-success: join_task
      task_14:
        on-success: join_task
      task_15:
        on-success: join_task
      task_16:
        on-success: join_task
      task_17:
        on-success: join_task
      task_18:
        on-success: join_task
      task_19:
        on-success: join_task
      task_20:
        on-success: join_task
      task_21:
        on-success: join_task
      task_22:
        on-success: join_task
      task_23:
        on-success: join_task
      task_24:
        on-success: join_task
      task_25:
        on-success: join_task
      task_26:
        on-success: join_task
      task_27:
        on-success: join_task
      task_28:
        on-success: join_task
      task_29:
        on-success: join_task
      task_30:
        on-success: join_task
      task_31:
        on-success: join_task
      task_32:
        on-success: join_task
      task_33:
        on-success: join_task
      task_34:
        on-success: join_task
      task_35:
        on-success: join_task
      task_36:
        on-success: join_task
      task_37:
        on-success: join_task
      task_38:
        on-success: join_task
      task_39:
        on-success: join_task
      task_40:
        on-success: join_task
      task_41:
        on-success: join_task
      task_42:
        on-success: join_task
      task_43:
        on-success: join_task
      task_44:
        on-success: join_task
      task_45:
        on-success: join_task
      task_46:
        on-success: join_task
      task_47:
        on-success: join_task
      task_48:
        on-success: join_task
      task_49:
        on-success: join_task
      task_50:
        on-success: join_task
      task_51:
        on-success: join_task
      task_52:
        on-success: join_task
      task_53:
        on-success: join_task
      task_54:
        on-success: join_task
      task_55:
        on-success: join_task
      task_56:
        on-success: join_task
      task_57:
        on-success: join_task
      task_58:
        on-success: join_task
      task_59:
        on-success: join_task
      task_60:
        on-success: join_task
      task_61:
        on-success: join_task
      task_62:
        on-success: join_task
      task_63:
        on-success: join_task
      task_64:
        on-success: join_task
      task_65:
        on-success: join_task
      task_66:
        on-success: join_task
      task_67:
        on-success: join_task
      task_68:
        on-success: join_task
      task_69:
        on-success: join_task
      task_70:
        on-success: join_task
      task_71:
        on-success: join_task
      task_72:
        on-success: join_task
      task_73:
        on-success: join_task
      task_74:
        on-success: join_task
      task_75:
        on-success: join_task
      task_76:
        on-success: join_task
      task_77:
        on-success: join_task
      task_78:
        on-success: join_task
      task_79:
        on-success: join_task
      task_80:
        on-success: join_task
      task_81:
        on-success: join_task
      task_82:
        on-success: join_task
      task_83:
        on-success: join_task
      task_84:
        on-success: join_task
      task_85:
        on-success: join_task
      task_86:
        on-success: join_task
      task_87:
        on-success: join_task
      task_88:
        on-success: join_task
      task_89:
        on-success: join_task
      task_90:
        on-success: join_task
      task_91:
        on-success: join_task
      task_92:
        on-success: join_task
      task_93:
        on-success: join_task
      task_94:
        on-success: join_task
      task_95:
        on-success: join_task
      task_96:
        on-success: join_task
      task_97:
        on-success: join_task
      task_98:
        on-success: join_task
      task_99:
        on-success: join_task
      task_100:
        on-success: join_task
      task_101:
        on-success: join_task
      task_102:
        on-success: join_task
      task_103:
        on-success: join_task
      task_104:
        on-success: join_task
      task_105:
        on-success: join_task
      task_106:
        on-success: join_task
      task_107:
        on-success: join_task
      task_108:
        on-success: join_task
      task_109:
        on-success: join_task
      task_110:
        on-success: join_task
      task_111:
        on-success: join_task
      task_112:
        on-success: join_task
      task_113:
        on-success: join_task
      task_114:
        on-success: join_task
      task_115:
        on-success: join_task
      task_116:
        on-success: join_task
      task_117:
        on-success: join_task
      task_118:
        on-success: join_task
      task_119:
        on-success: join_task
      task_120:
        on-success: join_task
      task_121:
        on-success: join_task
      task_122:
        on-success: join_task
      task_123:
        on-success: join_task
      task_124:
        on-success: join_task
      task_125:
        on-success: join_task
      task_126:
        on-success: join_task
      task_127:
        on-success: join_task
      task_128:
        on-success: join_task
      task_129:
        on-success: join_task
      task_130:
        on-success: join_task
      task_131:
        on-success: join_task
      task_132:
        on-success: join_task
      task_133:
        on-success: join_task
      task_134:
        on-success: join_task
      task_135:
        on-success: join_task
      task_136:
        on-success: join_task
      task_137:
        on-success: join_task
      task_138:
        on-success: join_task
      task_139:
        on-success: join_task
      task_140:
        on-success: join_task
      task_141:
        on-success: join_task
      task_142:
        on-success: join_task
      task_143:
        on-success: join_task
      task_144:
        on-success: join_task
      task_145:
        on-success: join_task
      task_146:
        on-success: join_task
      task_147:
        on-success: join_task
      task_148:
        on-success: join_task
      task_149:
        on-success: join_task
      task_150:
        on-success: join_task
      task_151:
        on-success: join_task
      task_152:
        on-success: join_task
      task_153:
        on-success: join_task
      task_154:
        on-success: join_task
      task_155:
        on-success: join_task
      task_156:
        on-success: join_task
      task_157:
        on-success: join_task
      task_158:
        on-success: join_task
      task_159:
        on-success: join_task
      task_160:
        on-success: join_task
      task_161:
        on-success: join_task
      task_162:
        on-success: join_task
      task_163:
        on-success: join_task
      task_164:
        on-success: join_task
      task_165:
        on-success: join_task
      task_166:
        on-success: join_task
      task_167:
        on-success: join_task
      task_168:
        on-success: join_task
      task_169:
        on-success: join_task
      task_170:
        on-success: join_task
      task_171:
        on-success: join_task
      task_172:
        on-success: join_task
      task_173:
        on-success: join_task
      task_174:
        on-success: join_task
      task_175:
        on-success: join_task
      task_176:
        on-success: join_task
      task_177:
        on-success: join_task
      task_178:
        on-success: join_task
      task_179:
        on-success: join_task
      task_180:
        on-success: join_task
      task_181:
        on-success: join_task
      task_182:
        on-success: join_task
      task_183:
        on-success: join_task
      task_184:
        on-success: join_task
      task_185:
        on-success: join_task
      task_186:
        on-success: join_task
      task_187:
        on-success: join_task
      task_188:
        on-success: join_task
      task_189:
        on-success: join_task
      task_190:
        on-success: join_task
      task_191:
        on-success: join_task
      task_192:
        on-success: join_task
      task_193:
        on-success: join_task
      task_194:
        on-success: join_task
      task_195:
        on-success: join_task
      task_196:
        on-success: join_task
      task_197:
        on-success: join_task
      task_198:
        on-success: join_task
      task_199:
        on-success: join_task
      task_200:
        on-success: join_task
      task_201:
        on-success: join_task
      task_202:
        on-success: join_task
      task_203:
        on-success: join_task
      task_204:
        on-success: join_task
      task_205:
        on-success: join_task
      task_206:
        on-success: join_task
      task_207:
        on-success: join_task
      task_208:
        on-success: join_task
      task_209:
        on-success: join_task
      task_210:
        on-success: join_task
      task_211:
        on-success: join_task
      task_212:
        on-success: join_task
      task_213:
        on-success: join_task
      task_214:
        on-success: join_task
      task_215:
        on-success: join_task
      task_216:
        on-success: join_task
      task_217:
        on-success: join_task
      task_218:
        on-success: join_task
      task_219:
        on-success: join_task
      task_220:
        on-success: join_task
      task_221:
        on-success: join_task
      task_222:
        on-success: join_task
      task_223:
        on-success: join_task
      task_224:
        on-success: join_task
      task_225:
        on-success: join_task
      task_226:
        on-success: join_task
      task_227:
        on-success: join_task
      task_228:
        on-success: join_task
      task_229:
        on-success: join_task
      task_230:
        on-success: join_task
      task_231:
        on-success: join_task
      task_232:
        on-success: join_task
      task_233:
        on-success: join_task
      task_234:
        on-success: join_task
      task_235:
        on-success: join_task
      task_236:
        on-success: join_task
      task_237:
        on-success: join_task
      task_238:
        on-success: join_task
      task_239:
        on-success: join_task
      task_240:
        on-success: join_task
      task_241:
        on-success: join_task
      task_242:
        on-success: join_task
      task_243:
        on-success: join_task
      task_244:
        on-success: join_task
      task_245:
        on-success: join_task
      task_246:
        on-success: join_task
      task_247:
        on-success: join_task
      task_248:
        on-success: join_task
      task_249:
        on-success: join_task
      task_250:
        on-success: join_task
      task_251:
        on-success: join_task
      task_252:
        on-success: join_task
      task_253:
        on-success: join_task
      task_254:
        on-success: join_task
      task_255:
        on-success: join_task
      task_256:
        on-success: join_task
      task_257:
        on-success: join_task
      task_258:
        on-success: join_task
      task_259:
        on-success: join_task
      task_260:
        on-success: join_task
      task_261:
        on-success: join_task
      task_262:
        on-success: join_task
      task_263:
        on-success: join_task
      task_264:
        on-success: join_task
      task_265:
        on-success: join_task
      task_266:
        on-success: join_task
      task_267:
        on-success: join_task
      task_268:
        on-success: join_task
      task_269:
        on-success: join_task
      task_270:
        on-success: join_task
      task_271:
        on-success: join_task
      task_272:
        on-success: join_task
      task_273:
        on-success: join_task
      task_274:
        on-success: join_task
      task_275:
        on-success: join_task
      task_276:
        on-success: join_task
      task_277:
        on-success: join_task
      task_278:
        on-success: join_task
      task_279:
        on-success: join_task
      task_280:
        on-success: join_task
      task_281:
        on-success: join_task
      task_282:
        on-success: join_task
      task_283:
        on-success: join_task
      task_284:
        on-success: join_task
      task_285:
        on-success: join_task
      task_286:
        on-success: join_task
      task_287:
        on-success: join_task
      task_288:
        on-success: join_task
      task_289:
        on-success: join_task
      task_290:
        on-success: join_task
      task_291:
        on-success: join_task
      task_292:
        on-success: join_task
      task_293:
        on-success: join_task
      task_294:
        on-success: join_task
      task_295:
        on-success: join_task
      task_296:
        on-success: join_task
      task_297:
        on-success: join_task
      task_298:
        on-success: join_task
      task_299:
        on-success: join_task
      task_300:
        on-success: join_task
      task_301:
        on-success: join_task
      task_302:
        on-success: join_task
      task_303:
        on-success: join_task
      task_304:
        on-success: join_task
      task_305:
        on-success: join_task
      task_306:
        on-success: join_task
      task_307:
        on-success: join_task
      task_308:
        on-success: join_task
      task_309:
        on-success: join_task
      task_310:
        on-success: join_task
      task_311:
        on-success: join_task
      task_312:
        on-success: join_task
      task_313:
        on-success: join_task
      task_314:
        on-success: join_task
      task_315:
        on-success: join_task
      task_316:
        on-success: join_task
      task_317:
        on-success: join_task
      task_318:
        on-success: join_task
      task_319:
        on-success: join_task
      task_320:
        on-success: join_task
      task_321:
        on-success: join_task
      task_322:
        on-success: join_task
      task_323:
        on-success: join_task
      task_324:
        on-success: join_task
      task_325:
        on-success: join_task
      task_326:
        on-success: join_task
      task_327:
        on-success: join_task
      task_328:
        on-success: join_task
      task_329:
        on-success: join_task
      task_330:
        on-success: join_task
      task_331:
        on-success: join_task
      task_332:
        on-success: join_task
      task_333:
        on-success: join_task
      task_334:
        on-success: join_task
      task_335:
        on-success: join_task
      task_336:
        on-success: join_task
      task_337:
        on-success: join_task
      task_338:
        on-success: join_task
      task_339:
        on-success: join_task
      task_340:
        on-success: join_task
      task_341:
        on-success: join_task
      task_342:
        on-success: join_task
      task_343:
        on-success: join_task
      task_344:
        on-success: join_task
      task_345:
        on-success: join_task
      task_346:
        on-success: join_task
      task_347:
        on-success: join_task
      task_348:
        on-success: join_task
      task_349:
        on-success: join_task
      task_350:
        on-success: join_task
      task_351:
        on-success: join_task
      task_352:
        on-success: join_task
      task_353:
        on-success: join_task
      task_354:
        on-success: join_task
      task_355:
        on-success: join_task
      task_356:
        on-success: join_task
      task_357:
        on-success: join_task
      task_358:
        on-success: join_task
      task_359:
        on-success: join_task
      task_360:
        on-success: join_task
      task_361:
        on-success: join_task
      task_362:
        on-success: join_task
      task_363:
        on-success: join_task
      task_364:
        on-success: join_task
      task_365:
        on-success: join_task
      task_366:
        on-success: join_task
      task_367:
        on-success: join_task
      task_368:
        on-success: join_task
      task_369:
        on-success: join_task
      task_370:
        on-success: join_task
      task_371:
        on-success: join_task
      task_372:
        on-success: join_task
      task_373:
        on-success: join_task
      task_374:
        on-success: join_task
      task_375:
        on-success: join_task
      task_376:
        on-success: join_task
      task_377:
        on-success: join_task
      task_378:
        on-success: join_task
      task_379:
        on-success: join_task
      task_380:
        on-success: join_task
      task_381:
        on-success: join_task
      task_382:
        on-success: join_task
      task_383:
        on-success: join_task
      task_384:
        on-success: join_task
      task_385:
        on-success: join_task
      task_386:
        on-success: join_task
      task_387:
        on-success: join_task
      task_388:
        on-success: join_task
      task_389:
        on-success: join_task
      task_390:
        on-success: join_task
      task_391:
        on-success: join_task
      task_392:
        on-success: join_task
      task_393:
        on-success: join_task
      task_394:
        on-success: join_task
      task_395:
        on-success: join_task
      task_396:
        on-success: join_task
      task_397:
        on-success: join_task
      task_398:
        on-success: join_task
      task_399:
        on-success: join_task
      task_400:
        on-success: join_task
      task_401:
        on-success: join_task
      task_402:
        on-success: join_task
      task_403:
        on-success: join_task
      task_404:
        on-success: join_task
      task_405:
        on-success: join_task
      task_406:
        on-success: join_task
      task_407:
        on-success: join_task
      task_408:
        on-success: join_task
      task_409:
        on-success: join_task
      task_410:
        on-success: join_task
      task_411:
        on-success: join_task
      task_412:
        on-success: join_task
      task_413:
        on-success: join_task
      task_414:
        on-success: join_task
      task_415:
        on-success: join_task
      task_416:
        on-success: join_task
      task_417:
        on-success: join_task
      task_418:
        on-success: join_task
      task_419:
        on-success: join_task
      task_420:
        on-success: join_task
      task_421:
        on-success: join_task
      task_422:
        on-success: join_task
      task_423:
        on-success: join_task
      task_424:
        on-success: join_task
      task_425:
        on-success: join_task
      task_426:
        on-success: join_task
      task_427:
        on-success: join_task
      task_428:
        on-success: join_task
      task_429:
        on-success: join_task
      task_430:
        on-success: join_task
      task_431:
        on-success: join_task
      task_432:
        on-success: join_task
      task_433:
        on-success: join_task
      task_434:
        on-success: join_task
      task_435:
        on-success: join_task
      task_436:
        on-success: join_task
      task_437:
        on-success: join_task
      task_438:
        on-success: join_task
      task_439:
        on-success: join_task
      task_440:
        on-success: join_task
      task_441:
        on-success: join_task
      task_442:
        on-success: join_task
      task_443:
        on-success: join_task
      task_444:
        on-success: join_task
      task_445:
        on-success: join_task
      task_446:
        on-success: join_task
      task_447:
        on-success: join_task
      task_448:
        on-success: join_task
      task_449:
        on-success: join_task
      task_450:
        on-success: join_task
      task_451:
        on-success: join_task
      task_452:
        on-success: join_task
      task_453:
        on-success: join_task
      task_454:
        on-success: join_task
      task_455:
        on-success: join_task
      task_456:
        on-success: join_task
      task_457:
        on-success: join_task
      task_458:
        on-success: join_task
      task_459:
        on-success: join_task
      task_460:
        on-success: join_task
      task_461:
        on-success: join_task
      task_462:
        on-success: join_task
      task_463:
        on-success: join_task
      task_464:
        on-success: join_task
      task_465:
        on-success: join_task
      task_466:
        on-success: join_task
      task_467:
        on-success: join_task
      task_468:
        on-success: join_task
      task_469:
        on-success: join_task
      task_470:
        on-success: join_task
      task_471:
        on-success: join_task
      task_472:
        on-success: join_task
      task_473:
        on-success: join_task
      task_474:
        on-success: join_task
      task_475:
        on-success: join_task
      task_476:
        on-success: join_task
      task_477:
        on-success: join_task
      task_478:
        on-success: join_task
      task_479:
        on-success: join_task
      task_480:
        on-success: join_task
      task_481:
        on-success: join_task
      task_482:
        on-success: join_task
      task_483:
        on-success: join_task
      task_484:
        on-success: join_task
      task_485:
        on-success: join_task
      task_486:
        on-success: join_task
      task_487:
        on-success: join_task
      task_488:
        on-success: join_task
      task_489:
        on-success: join_task
      task_490:
        on-success: join_task
      task_491:
        on-success: join_task
      task_492:
        on-success: join_task
      task_493:
        on-success: join_task
      task_494:
        on-success: join_task
      task_495:
        on-success: join_task
      task_496:
        on-success: join_task
      task_497:
        on-success: join_task
      task_498:
        on-success: join_task
      task_499:
        on-success: join_task
      task_500:
        on-success: join_task
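The join_100_wb and join_500_wb workbooks above differ only in the number of parallel tasks feeding the `join: all` barrier (the join task runs once, after every inbound task succeeds). A short generator script — a sketch, not part of the Mistral tree — shows how workbooks of this shape can be produced instead of maintained by hand:

```python
def generate_join_workbook(n):
    """Emit a Mistral v2 workbook with n parallel tasks that all
    converge on a single 'join: all' task (same shape as
    join_100_wb.yaml / join_500_wb.yaml)."""
    lines = [
        "---",
        "version: '2.0'",
        "",
        "name: join_%d_wb" % n,
        "",
        "workflows:",
        "  wf:",
        '    description: contains "join" that joins %d parallel tasks' % n,
        "",
        "    tasks:",
        "      join_task:",
        "        join: all",
        "",
    ]
    # Each generated task has no action of its own; on success it
    # simply signals the join barrier.
    for i in range(1, n + 1):
        lines.append("      task_%d:" % i)
        lines.append("        on-success: join_task")
    return "\n".join(lines) + "\n"

print(generate_join_workbook(100))
```

Running it with `n=500` reproduces the structure of join_500_wb.yaml above.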
# File: mistral-6.0.0/rally-jobs/extra/scenarios/complex_wf/complex_wf_params.json
{
  "env": {
    "env_param_01": {
      "env_param_01_nested_01": "xyz"
    },
    "env_param_02": {
      "env_param_02_nested_01": "xyz"
    },
    "working_url": "http://httpstat.us/200",
    "return_error_code_url": "https://httpbin.org/status/418"
  }
}

# File: mistral-6.0.0/rally-jobs/extra/scenarios/complex_wf/complex_wf_wb.yaml
---
version: "2.0"

name: very_big_wb

actions:
  my_action:
    input:
      - action_parameter_05: xyz
      - action_parameter_06: "xyz"
      - env
      - action_parameter_07: xyz
      - action_parameter_04: null
      - action_parameter_03: "xyz"
      - action_parameter_02: ""
      - action_parameter_08: ""
      - action_parameter_09: xyz
      - action_param_01: ""
      - action_parameter_01: "xyz"
      - action_parameter_10: "xyz"
      - output: output
    base: std.http
    base-input:
      url: <% $.env.working_url %>
      allow_redirects: true
      verify: 'xyz'
      headers:
        "Content-Type": "application/json"
        "Header1": <% $.env.env_param_01.env_param_01_nested_01 %>
        "Header2": <% $.env.env_param_01.env_param_01_nested_01 + ' ' + $.env.env_param_02.env_param_02_nested_01 %>
      method: PATCH
      body:
        xyz: <% $.action_parameter_05 %>

workflows:
  my_workflow:
    input:
      - workflow_parameter_04: "workflow_parameter_04"
      - workflow_parameter_03: "workflow_parameter_03"
      - workflow_parameter_02: "workflow_parameter_02"
      - workflow_parameter_01: null
      - workflow_parameter_05: "workflow_parameter_05"
      - workflow_parameter_06: "workflow_parameter_06"
    output:
      wf_output_01: xyz
      wf_output_02: <% env().env_param_01 %>
    task-defaults:
      on-error:
        - wf_default_on_error
    tasks:
      wf_task_01:
        action: my_action
        input:
          env: "<% env() %>"
          action_parameter_02: ""
          action_parameter_03: xyz
          action_parameter_04: '<% env().env_param_02.env_param_02_nested_01 %>'
on-error: - fail on-success: - wf_task_02 publish: wf_published_01: nested_01: "xyz" wf_task_02: action: std.http input: url: <% env().working_url %> method: POST allow_redirects: true headers: "Content-Type": "application/json" "X-Flow-ID": <% env().env_param_02.env_param_02_nested_01 %> "xyz": <% $.wf_published_01.nested_01 %> "Authorization": <% env().env_param_02.env_param_02_nested_01 + ' ' + env().env_param_02.env_param_02_nested_01 %> body: env_param_01_nested_01: <% env().env_param_01.env_param_01_nested_01 %> aaaaa: a1: <% $.workflow_parameter_03 %> a2: <% $.workflow_parameter_02 %> a3: <% $.workflow_parameter_01 %> a4: <% env().env_param_02 %> a5: <% $.workflow_parameter_04 %> on-success: - wf_task_05 wf_task_03: action: std.echo input: output: "<% $ %>" wf_task_04: action: std.echo input: output: "<% $ %>" on-complete: - fail wf_default_on_error: action: my_action input: action_parameter_02: '' action_parameter_01: "<% $.wf_published_01.nested_01 %>" action_parameter_03: fxyz env: "<% env() %>" on-error: - wf_task_04 on-success: - wf_task_04 wf_task_05: action: my_action input: env: "<% env() %>" action_parameter_02: "" action_parameter_01: "<% $.wf_published_01.nested_01 %>" action_parameter_03: xyz on-error: - wf_task_03 top_level_workflow: tasks: task_01_top_level: action: "std.echo" input: output: "<% $ %>" on-success: - "do_work" do_work: workflow: "big_wf" on-success: - more_work - make_sure_no_errors task_12: action: "my_action" input: env: "<% env() %>" action_parameter_06: "xxx" action_parameter_03: "<% env().env_param_01.env_param_01_nested_01 %>" action_parameter_10: "ggg" on-complete: - make_sure_no_errors more_work: action: "my_action" input: env: "<% env() %>" action_parameter_06: "xxx" action_parameter_03: "<% env().env_param_01.env_param_01_nested_01 %>" on-success: - "task_12" make_sure_no_errors: action: std.noop publish: tasks_in_error: "<% tasks(execution().id, true, ERROR) %>" on-success: - do_fail: "<% len($.tasks_in_error) > 0 %>" 
do_fail: action: std.fail big_wf: input: - attribute_01: "attribute_01" - attribute_02: "attribute_02" - attribute_03: "attribute_03" - attribute_04: "attribute_04" - attribute_05: "attribute_05" - attribute_06: "attribute_06" - attribute_07: "attribute_07" - attribute_08: "attribute_08" - attribute_09: "attribute_09" - attribute_10: "attribute_10" - attribute_11: "attribute_11" - attribute_12: "attribute_12" - attribute_13: "attribute_13" - attribute_14: "attribute_14" - attribute_15: "attribute_15" - attribute_16: "attribute_16" - attribute_17: "attribute_17" - attribute_18: "attribute_18" - attribute_19: "attribute_19" - attribute_20: "attribute_20" - attribute_21: "attribute_21" - attribute_22: "attribute_22" - attribute_23: "attribute_23" - attribute_24: "attribute_24" - attribute_25: "attribute_25" - attribute_26: "attribute_26" - attribute_27: "attribute_27" - attribute_28: "attribute_28" - attribute_29: "attribute_29" - input_01: "input_01" - input_02: "input_02" - input_03: "input_03" - input_04: "input_04" - input_05: "input_05" - input_06: "input_06" - input_07: "input_07" - input_08: "input_08" - property_01: null - property_02: "*" - property_03: null - property_04: "xyz" - property_05: null - property_06: null - property_07: null - property_08: "1232" - property_09: "xyz" - property_10: "*" - property_11: "*" - property_12: "xyz" - property_13: false - property_14: null - property_15: "xyz" - property_16: null - property_17: "xyz" - property_18: null - property_19: "xyz" - property_20: "1" - property_21: "xyz" - property_22: 123 - property_23: false task-defaults: on-error: - "system_task_on_error" tasks: task_01: action: "my_action" input: env: "<% env() %>" action_param_01: action_param_01_nested_param: - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% $.get(attribute_30) %>" publish: attribute_31: "<% $.get(attribute_30) %>" on-success: - "task_08" - "task_18" - "task_23" - "task_25" task_02: action: "my_action" input: env: "<% env() %>" action_param_01: 
action_param_01_nested_param: - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% $.get(attribute_30) %>" publish: attribute_63: "<% $.get(attribute_30) %>" on-success: - "task_08" - "task_18" - "task_23" - "task_25" task_03: action: "my_action" input: env: "<% env() %>" action_param_01: action_param_01_nested_param: - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% $.get(attribute_30) %>" publish: attribute_32: "<% $.get(attribute_30) %>" on-success: - "task_08" - "task_18" - "task_23" - "task_25" task_04: workflow: "my_workflow" input: workflow_parameter_01: workflow_parameter_01_nested_param: "<% $.get(input_01) %>" workflow_parameter_02: "<% $.get(attribute_01) %>" workflow_parameter_03: "<% $.get(attribute_03) %>" workflow_parameter_04: "<% $.get(attribute_02) %>" publish: task_04_workflow_outputs: published_01: "<% env().env_param_01.env_param_01_nested_01 %>" published_02: "<% env().env_param_01.env_param_01_nested_01 %>" attribute_30: "<% env().env_param_01.env_param_01_nested_01 %>" attribute_33: "<% env().env_param_01.env_param_01_nested_01 %>" on-success: - "task_01" - "task_02" - "task_03" - "task_04_2" task_04_2: action: "my_action" input: env: "<% env() %>" action_param_01: action_param_01_nested_02: xyz: "xyz" xyzn: "xyz" xyznm: "xyz" xyznmt: "<% $.task_04_workflow_outputs %>" action_param_01_nested_param: - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% env().env_param_01.env_param_01_nested_01 %>" - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% env().env_param_01.env_param_01_nested_01 %>" on-success: - "task_23" task_05: workflow: "my_workflow" input: workflow_parameter_01: xyz_01: "<% $.get(property_04) %>" xyz_02: "<% $.get(attribute_34) %>" xyz_03: "<% $.get(attribute_35) %>" xyz_04: "<% $.get(attribute_36) %>" xyz_05: "<% $.get(property_06) %>" xyz_06: "<% $.get(attribute_37) %>" xyz_07: "<% $.get(property_07) %>" xyz_08: "<% $.get(attribute_38) %>" xyz_09: "<% $.get(attribute_39) %>" xyz_10: "<% $.get(attribute_40) %>" xyz_11: "<% $.get(attribute_41) %>" xyz_12: "<% 
$.get(attribute_42) %>" xyz_13: "<% $.get(property_02) %>" xyz_14: "<% $.get(property_01) %>" xyz_15: "<% $.get(property_05) %>" xyz_16: "<% $.get(attribute_43) %>" xyz_17: "<% $.get(attribute_44) %>" xyz_18: "<% $.get(property_03) %>" workflow_parameter_02: "<% $.get(attribute_04) %>" workflow_parameter_03: "<% $.get(attribute_06) %>" workflow_parameter_04: "<% $.get(attribute_05) %>" publish: task_05_workflow_outputs: published_01: "<% env().env_param_01.env_param_01_nested_01 %>" published_02: "<% env().env_param_01.env_param_01_nested_01 %>" attribute_45: "<% env().env_param_01.env_param_01_nested_01 %>" attribute_46: "<% env().env_param_01.env_param_01_nested_01 %>" join: "all" on-success: - "task_05_2" - "task_23" task_05_2: action: "my_action" input: env: "<% env() %>" action_param_01: action_param_01_nested_02: xyz: "xyz" xyzn: "xyz" xyznm: "xyz" xyznmt: "<% $.task_05_workflow_outputs %>" action_param_01_nested_param: - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% env().env_param_01.env_param_01_nested_01 %>" - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% env().env_param_01.env_param_01_nested_01 %>" on-success: - "task_23" task_06: action: "my_action" input: env: "<% env() %>" action_param_01: action_param_01_nested_param: - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% $.get(property_11) %>" - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% $.get(property_10) %>" - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% $.get(property_09) %>" - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% $.get(property_08) %>" publish: attribute_37: "<% $.get(property_08) %>" attribute_34: "<% $.get(property_09) %>" attribute_42: "<% $.get(property_10) %>" attribute_38: "<% $.get(property_11) %>" on-success: - "task_05" - "task_08" - "task_18" - "task_23" task_07: workflow: "my_workflow" input: workflow_parameter_01: xyz_11: "<% $.get(attribute_53) %>" workflow_parameter_02: "<% $.get(property_18) %>" workflow_parameter_03: "<% $.get(attribute_19) %>" workflow_parameter_04: "<% $.get(property_17) %>" 
workflow_parameter_05: "<% $.get(property_19) %>" workflow_parameter_06: "<% $.get(property_20) %>" workflow_parameter_07: "<% $.get(attribute_20) %>" publish: task_07_workflow_outputs: published_01: "<% env().env_param_01.env_param_01_nested_01 %>" outputs: "<% env().env_param_01.env_param_01_nested_01 %>" attribute_65: "<% env().env_param_01.env_param_01_nested_01 %>" attribute_66: "<% env().env_param_01.env_param_01_nested_01 %>" on-success: - "task_07_2" - "task_08" - "task_18" task_07_2: action: "my_action" input: env: "<% env() %>" action_param_01: action_param_01_nested_02: xyz: "xyz" xyzn: "xyz" xyznm: "xyz" xyznmt: "<% $.task_07_workflow_outputs %>" action_param_01_nested_param: - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% env().env_param_01.env_param_01_nested_01 %>" - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% env().env_param_01.env_param_01_nested_01 %>" on-success: - "task_23" task_08: workflow: "my_workflow" input: workflow_parameter_01: xyz_01: "<% $.get(property_12) %>" xyz_02: "<% $.get(attribute_65) %>" xyz_03: "<% $.get(attribute_32) %>" xyz_04: "<% $.get(property_13) %>" workflow_parameter_02: "<% $.get(attribute_07) %>" workflow_parameter_03: "<% $.get(attribute_09) %>" workflow_parameter_04: "<% $.get(attribute_08) %>" publish: task_08_workflow_outputs: published_01: "<% env().env_param_01.env_param_01_nested_01 %>" published_02: "<% env().env_param_01.env_param_01_nested_01 %>" attribute_67: "<% env().env_param_01.env_param_01_nested_01 %>" attribute_68: "<% env().env_param_01.env_param_01_nested_01 %>" join: "all" on-success: - "task_05" - "task_08_2" - "task_09" task_08_2: action: "my_action" input: env: "<% env() %>" action_param_01: action_param_01_nested_02: xyz: "xyz" xyzn: "xyz" xyznm: "xyz" xyznmt: "<% $.task_08_workflow_outputs %>" action_param_01_nested_param: - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% env().env_param_01.env_param_01_nested_01 %>" - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% env().env_param_01.env_param_01_nested_01 %>" 
on-success: - "task_23" task_09: action: "my_action" input: env: "<% env() %>" action_param_01: action_param_01_nested_param: - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "xyz" - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% $.get(attribute_67) %>" publish: attribute_43: "xyz" attribute_44: "<% $.get(attribute_67) %>" on-success: - "task_05" - "task_23" task_10: workflow: "my_workflow" input: workflow_parameter_01: xyz_01: "<% $.get(attribute_45) %>" xyz_11: "<% $.get(attribute_46) %>" xyz_21: "<% $.get(input_01) %>" xyz_22: "<% $.get(attribute_47) %>" xyz_23: "<% $.get(attribute_48) %>" workflow_parameter_02: "<% $.get(attribute_10) %>" workflow_parameter_03: "<% $.get(attribute_12) %>" workflow_parameter_04: "<% $.get(attribute_11) %>" publish: task_10_workflow_outputs: published_01: "<% env().env_param_01.env_param_01_nested_01 %>" published_02: "<% env().env_param_01.env_param_01_nested_01 %>" attribute_49: "<% env().env_param_01.env_param_01_nested_01 %>" attribute_50: "<% env().env_param_01.env_param_01_nested_01 %>" join: "all" on-success: - "task_07" - "task_10_2" task_10_2: action: "my_action" input: env: "<% env() %>" action_param_01: action_param_01_nested_02: xyz: "xyz" xyzn: "xyz" xyznm: "xyz" xyznmt: "<% $.task_10_workflow_outputs %>" action_param_01_nested_param: - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% env().env_param_01.env_param_01_nested_01 %>" - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% env().env_param_01.env_param_01_nested_01 %>" on-success: - "task_23" task_11: action: "my_action" input: env: "<% env() %>" action_param_01: action_param_01_nested_param: - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% $.get(attribute_51) %>" publish: attribute_46: "<% $.get(attribute_51) %>" on-success: - "task_10" - "task_23" task_12: workflow: "my_workflow" input: workflow_parameter_02: "<% $.get(attribute_13) %>" workflow_parameter_03: "<% $.get(attribute_15) %>" workflow_parameter_04: "<% $.get(attribute_14) %>" publish: task_12_workflow_outputs: published_01: "<% 
env().env_param_01.env_param_01_nested_01 %>" published_02: "<% env().env_param_01.env_param_01_nested_01 %>" attribute_51: "<% env().env_param_01.env_param_01_nested_01 %>" attribute_52: "<% env().env_param_01.env_param_01_nested_01 %>" attribute_53: "<% env().env_param_01.env_param_01_nested_01 %>" on-success: - "task_11" - "task_12_2" task_12_2: action: "my_action" input: env: "<% env() %>" action_param_01: action_param_01_nested_02: xyz: "xyz" xyzn: "xyz" xyznm: "xyz" xyznmt: "<% $.task_12_workflow_outputs %>" action_param_01_nested_param: - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% env().env_param_01.env_param_01_nested_01 %>" - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% env().env_param_01.env_param_01_nested_01 %>" on-success: - "task_23" task_13: action: "my_action" input: env: "<% env() %>" action_param_01: action_param_01_nested_param: - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% $.get(attribute_54) %>" - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% $.get(property_15) %>" - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% $.get(input_02) %>" publish: attribute_47: "<% $.get(input_02) %>" attribute_45: "<% $.get(property_15) %>" attribute_48: "<% $.get(attribute_54) %>" on-success: - "task_05" - "task_10" - "task_17" - "task_23" task_14: action: "my_action" input: env: "<% env() %>" action_param_01: action_param_01_nested_param: - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% $.get(attribute_54) %>" publish: attribute_55: "<% $.get(attribute_54) %>" on-success: - "task_05" - "task_10" - "task_17" - "task_23" task_15: action: "my_action" input: env: "<% env() %>" action_param_01: action_param_01_nested_param: - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% $.get(attribute_17) %>" - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% $.get(attribute_54) %>" publish: attribute_39: "<% $.get(attribute_54) %>" attribute_35: "<% $.get(attribute_17) %>" on-success: - "task_05" - "task_10" - "task_17" - "task_23" task_16: workflow: "my_workflow" input: workflow_parameter_01: xyz_01: "<% $.get(property_15) %>" 
xyz_02: "<% $.get(input_03) %>" xyz_03: "<% $.get(attribute_56) %>" xyz_04: "<% $.get(property_16) %>" xyz_05: "<% $.get(property_14) %>" xyz_06: "<% $.get(input_02) %>" workflow_parameter_02: "<% $.get(attribute_16) %>" workflow_parameter_03: "<% $.get(attribute_18) %>" workflow_parameter_04: "<% $.get(attribute_17) %>" publish: task_16_workflow_outputs: published_01: "<% env().env_param_01.env_param_01_nested_01 %>" published_02: "<% env().env_param_01.env_param_01_nested_01 %>" attribute_54: "<% env().env_param_01.env_param_01_nested_01 %>" attribute_57: "<% env().env_param_01.env_param_01_nested_01 %>" join: "all" on-success: - "task_05" - "task_10" - "task_13" - "task_14" - "task_15" - "task_16_2" - "task_17" task_16_2: action: "my_action" input: env: "<% env() %>" action_param_01: action_param_01_nested_02: xyz: "xyz" xyzn: "xyz" xyznm: "xyz" xyznmt: "<% $.task_16_workflow_outputs %>" action_param_01_nested_param: - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% env().env_param_01.env_param_01_nested_01 %>" - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% env().env_param_01.env_param_01_nested_01 %>" on-success: - "task_23" task_17: workflow: "my_workflow" input: workflow_parameter_01: xyz_01: "<% $.get(input_05) %>" xyz_02: "<% $.get(input_06) %>" xyz_03: "<% $.get(property_21) %>" xyz_04: "<% $.get(input_08) %>" xyz_05: "<% $.get(input_04) %>" xyz_06: "<% $.get(input_07) %>" xyz_07: "<% $.get(attribute_55) %>" workflow_parameter_02: "<% $.get(attribute_21) %>" workflow_parameter_03: "<% $.get(attribute_23) %>" workflow_parameter_04: "<% $.get(attribute_22) %>" publish: attribute_58: "<% env().env_param_01.env_param_01_nested_01 %>" attribute_59: "<% env().env_param_01.env_param_01_nested_01 %>" task_17_workflow_outputs: published_01: "<% env().env_param_01.env_param_01_nested_01 %>" published_02: "<% env().env_param_01.env_param_01_nested_01 %>" join: "all" on-success: - "task_17_2" - "task_23" task_17_2: action: "my_action" input: env: "<% env() %>" action_param_01: 
action_param_01_nested_02: xyz: "xyz" xyzn: "xyz" xyznm: "xyz" xyznmt: "<% $.task_17_workflow_outputs %>" action_param_01_nested_param: - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% env().env_param_01.env_param_01_nested_01 %>" - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% env().env_param_01.env_param_01_nested_01 %>" on-success: - "task_23" task_18: workflow: "my_workflow" input: workflow_parameter_01: xyz_15: "<% $.get(property_22) %>" xyz_16: "<% $.get(attribute_31) %>" workflow_parameter_02: "<% $.get(attribute_24) %>" workflow_parameter_03: "<% $.get(attribute_26) %>" workflow_parameter_04: "<% $.get(attribute_25) %>" publish: attribute_60: "<% env().env_param_01.env_param_01_nested_01 %>" attribute_61: "<% env().env_param_01.env_param_01_nested_01 %>" task_18_workflow_outputs: published_01: "<% env().env_param_01.env_param_01_nested_01 %>" published_02: "<% env().env_param_01.env_param_01_nested_01 %>" join: "all" on-success: - "task_05" - "task_18_2" - "task_19" task_18_2: action: "my_action" input: env: "<% env() %>" action_param_01: action_param_01_nested_02: xyz: "xyz" xyzn: "xyz" xyznm: "xyz" xyznmt: "<% $.task_18_workflow_outputs %>" action_param_01_nested_param: - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% env().env_param_01.env_param_01_nested_01 %>" - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% env().env_param_01.env_param_01_nested_01 %>" on-success: - "task_23" task_19: action: "my_action" input: env: "<% env() %>" action_param_01: action_param_01_nested_param: - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% $.get(attribute_60) %>" publish: attribute_40: "<% $.get(attribute_60) %>" on-success: - "task_05" - "task_23" task_20: action: "my_action" input: output: "<% $ %>" on-complete: - "fail" task_21: workflow: "my_workflow" on-success: - "task_04" - "task_06" - "task_12" problematic_task: action: "my_action" input: env: "<% env() %>" action_parameter_02: "problematic_task" action_parameter_01: "<% env().env_param_02.env_param_02_nested_01 %>" action_parameter_03: "xyz" 
on-success: - "task_20" on-error: - "task_20" task_22: action: "my_action" input: env: "<% env() %>" action_parameter_02: "" action_parameter_01: "<% env().env_param_02.env_param_02_nested_01 %>" action_parameter_03: "xyz" on-error: - "task_20" task_23: workflow: "my_workflow" input: env: "<% env() %>" workflow_parameter_05: "xyz" workflow_parameter_06: "xyz" join: "all" on-success: - "task_22" system_task_on_error: workflow: "my_workflow" input: env: "<% env() %>" workflow_parameter_05: "xyz" workflow_parameter_06: "xyz" join: 1 on-complete: - "problematic_task" task_24: action: "my_action" input: env: "<% env() %>" action_param_01: action_param_01_nested_param: - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% $.get(attribute_62) %>" publish: attribute_56: "<% $.get(attribute_62) %>" on-success: - "task_16" - "task_23" task_25: workflow: "my_workflow" input: workflow_parameter_01: xyz_01: "<% $.get(attribute_63) %>" xyz_02: "<% $.get(property_23) %>" workflow_parameter_02: "<% $.get(attribute_27) %>" workflow_parameter_03: "<% $.get(attribute_29) %>" workflow_parameter_04: "<% $.get(attribute_28) %>" publish: attribute_62: "<% env().env_param_01.env_param_01_nested_01 %>" attribute_64: "<% env().env_param_01.env_param_01_nested_01 %>" task_25_workflow_outputs: published_01: "<% env().env_param_01.env_param_01_nested_01 %>" published_02: "<% env().env_param_01.env_param_01_nested_01 %>" join: "all" on-success: - "task_16" - "task_24" - "task_25_2" task_25_2: action: "my_action" input: env: "<% env() %>" action_param_01: action_param_01_nested_02: xyz: "xyz" xyzn: "xyz" xyznm: "xyz" xyznmt: "<% $.task_25_workflow_outputs %>" action_param_01_nested_param: - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% env().env_param_01.env_param_01_nested_01 %>" - xyz_30: "xyz" xyznm: "xyz" xyznmtv: "<% env().env_param_01.env_param_01_nested_01 %>" on-success: - "task_23" mistral-6.0.0/rally-jobs/extra/scenarios/with_items/0000775000175100017510000000000013245513604022562 5ustar 
zuulzuul00000000000000mistral-6.0.0/rally-jobs/extra/scenarios/with_items/wb.yaml0000666000175100017510000000041313245513262024056 0ustar zuulzuul00000000000000---
version: '2.0'

name: with_items_wb

workflows:
  wf:
    input:
      - count: 10
      - concurrency: 0

    tasks:
      task1:
        with-items: i in <% range(0, $.count) %>
        action: std.echo output=<% $.i %>
        concurrency: <% $.concurrency %>
mistral-6.0.0/rally-jobs/extra/scenarios/with_items/count_100_concurrency_10.json0000666000175100017510000000005013245513262030074 0ustar zuulzuul00000000000000{
    "count": 100,
    "concurrency": 10
}
mistral-6.0.0/run_functional_tests.sh0000777000175100017510000000204213245513262020026 0ustar zuulzuul00000000000000#! /usr/bin/env bash

ARG=$1

function pre_hook() {
    export WITHOUT_AUTH="True"
    IS_TEMPEST=$(pip freeze | grep tempest)

    if [ -z "$IS_TEMPEST" ]
    then
        echo "$(tput setaf 4)No such module 'tempest' in the system. Before running this script please install 'tempest' module using : pip install git+http://github.com/openstack/tempest.git$(tput sgr 0)"
        exit 1
    fi
}

function run_tests_by_version() {
    echo "$(tput setaf 4)Running integration API and workflow execution tests for v$1$(tput sgr 0)"

    export VERSION="v$1"
    nosetests -v mistral_tempest_tests/tests/api/v$1/
    unset VERSION
}

function run_tests() {
    if [ -z "$ARG" ]
    then
        run_tests_by_version 1
        run_tests_by_version 2
    elif [ "$ARG" == "v1" ]
    then
        run_tests_by_version 1
    elif [ "$ARG" == "v2" ]
    then
        run_tests_by_version 2
    fi
}

function post_hook () {
    unset LOCAL_RUN
}

#----------main-part----------

echo "$(tput setaf 4)Preparation for tests running...$(tput sgr 0)"
pre_hook

echo "$(tput setaf 4)Running tests...$(tput sgr 0)"
run_tests

post_hook
mistral-6.0.0/etc/0000775000175100017510000000000013245513604013772 5ustar zuulzuul00000000000000mistral-6.0.0/etc/policy.json0000666000175100017510000000000313245513261016156 0ustar zuulzuul00000000000000{}
mistral-6.0.0/etc/logging.conf.sample.rotating0000666000175100017510000000117113245513261021376 0ustar
zuulzuul00000000000000[loggers]
keys=root

[handlers]
keys=consoleHandler, fileHandler

[formatters]
keys=verboseFormatter, simpleFormatter

[logger_root]
level=DEBUG
handlers=consoleHandler, fileHandler

[handler_consoleHandler]
class=StreamHandler
level=INFO
formatter=simpleFormatter
args=(sys.stdout,)

[handler_fileHandler]
class=logging.handlers.RotatingFileHandler
level=INFO
formatter=verboseFormatter
args=("/var/log/mistral.log", "a", 10485760, 5)

[formatter_verboseFormatter]
format=%(asctime)s %(thread)s %(levelname)s %(module)s [-] %(message)s
datefmt=

[formatter_simpleFormatter]
format=%(asctime)s %(levelname)s [-] %(message)s
datefmt=
mistral-6.0.0/etc/README.mistral.conf0000666000175100017510000000024713245513261017254 0ustar zuulzuul00000000000000The mistral.conf sample file is no longer generated and maintained in Trunk. To generate your own version of mistral.conf, use the following command:

    tox -egenconfig
mistral-6.0.0/etc/logging.conf.sample0000666000175100017510000000111713245513261017550 0ustar zuulzuul00000000000000[loggers]
keys=root

[handlers]
keys=consoleHandler, fileHandler

[formatters]
keys=verboseFormatter, simpleFormatter

[logger_root]
level=DEBUG
handlers=consoleHandler, fileHandler

[handler_consoleHandler]
class=StreamHandler
level=INFO
formatter=simpleFormatter
args=(sys.stdout,)

[handler_fileHandler]
class=FileHandler
level=INFO
formatter=verboseFormatter
args=("/var/log/mistral.log",)

[formatter_verboseFormatter]
format=%(asctime)s %(thread)s %(levelname)s %(module)s [-] %(message)s
datefmt=

[formatter_simpleFormatter]
format=%(asctime)s %(levelname)s [-] %(message)s
datefmt=
mistral-6.0.0/etc/wf_trace_logging.conf.sample.rotating0000666000175100017510000000254113245513261023252 0ustar zuulzuul00000000000000[loggers]
keys=workflow_trace,profiler_trace,root

[handlers]
keys=consoleHandler, wfTraceFileHandler, profilerFileHandler, fileHandler

[formatters]
keys=wfFormatter, profilerFormatter, simpleFormatter, verboseFormatter
[logger_workflow_trace]
level=INFO
handlers=consoleHandler, wfTraceFileHandler
qualname=workflow_trace

[logger_profiler_trace]
level=INFO
handlers=profilerFileHandler
qualname=profiler_trace

[logger_root]
level=INFO
handlers=fileHandler

[handler_fileHandler]
class=logging.handlers.RotatingFileHandler
level=INFO
formatter=verboseFormatter
args=("/var/log/mistral.log", "a", 10485760, 5)

[handler_consoleHandler]
class=StreamHandler
level=INFO
formatter=simpleFormatter
args=(sys.stdout,)

[handler_wfTraceFileHandler]
class=logging.handlers.RotatingFileHandler
level=INFO
formatter=wfFormatter
args=("/var/log/mistral_wf_trace.log", "a", 10485760, 5)

[handler_profilerFileHandler]
class=logging.handlers.RotatingFileHandler
level=INFO
formatter=profilerFormatter
args=("/var/log/mistral_osprofile.log", "a", 10485760, 5)

[formatter_verboseFormatter]
format=%(asctime)s %(thread)s %(levelname)s %(module)s [-] %(message)s
datefmt=

[formatter_simpleFormatter]
format=%(asctime)s %(levelname)s [-] %(message)s
datefmt=

[formatter_wfFormatter]
format=%(asctime)s WF [-] %(message)s
datefmt=

[formatter_profilerFormatter]
format=%(message)s
datefmt=
mistral-6.0.0/etc/wf_trace_logging.conf.sample0000666000175100017510000000234313245513261021424 0ustar zuulzuul00000000000000[loggers]
keys=workflow_trace,profiler_trace,root

[handlers]
keys=consoleHandler, wfTraceFileHandler, profilerFileHandler, fileHandler

[formatters]
keys=wfFormatter, profilerFormatter, simpleFormatter, verboseFormatter

[logger_workflow_trace]
level=INFO
handlers=consoleHandler, wfTraceFileHandler
qualname=workflow_trace

[logger_profiler_trace]
level=INFO
handlers=profilerFileHandler
qualname=profiler_trace

[logger_root]
level=INFO
handlers=fileHandler

[handler_fileHandler]
class=FileHandler
level=INFO
formatter=verboseFormatter
args=("/var/log/mistral.log",)

[handler_consoleHandler]
class=StreamHandler
level=INFO
formatter=simpleFormatter
args=(sys.stdout,)

[handler_wfTraceFileHandler]
class=FileHandler
level=INFO
formatter=wfFormatter
args=("/var/log/mistral_wf_trace.log",)

[handler_profilerFileHandler]
class=FileHandler
level=INFO
formatter=profilerFormatter
args=("/var/log/mistral_osprofile.log",)

[formatter_verboseFormatter]
format=%(asctime)s %(thread)s %(levelname)s %(module)s [-] %(message)s
datefmt=

[formatter_simpleFormatter]
format=%(asctime)s %(levelname)s [-] %(message)s
datefmt=

[formatter_wfFormatter]
format=%(asctime)s WF [-] %(message)s
datefmt=

[formatter_profilerFormatter]
format=%(message)s
datefmt=
mistral-6.0.0/etc/event_definitions.yml.sample0000666000175100017510000000027213245513261021513 0ustar zuulzuul00000000000000- event_types:
    - compute.instance.create.*
  properties:
    resource_id: <% $.payload.instance_id %>
    project_id: <% $.context.project_id %>
    user_id: <% $.context.user_id %>
mistral-6.0.0/CONTRIBUTING.rst0000666000175100017510000000340713245513261015665 0ustar zuulzuul00000000000000=======================
Contributing to Mistral
=======================

If you're interested in contributing to the Mistral project, the following will help get you started.
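The logging samples above are plain Python ``fileConfig``-format INI files, so they can be loaded with the standard library alone; no Mistral code is needed to try them out. The sketch below is a trimmed-down version of ``wf_trace_logging.conf.sample`` (console handler only, written to a temporary file purely for illustration) and is not how Mistral itself loads its config:

```python
# Minimal sketch: load a Mistral-style logging INI with the stdlib.
# The logger/handler/formatter names mirror the sample above; the
# trimmed config keeps only the console handler.
import logging
import logging.config
import os
import tempfile

CONF = """\
[loggers]
keys=root,workflow_trace

[handlers]
keys=consoleHandler

[formatters]
keys=wfFormatter

[logger_root]
level=INFO
handlers=consoleHandler

[logger_workflow_trace]
level=INFO
handlers=consoleHandler
qualname=workflow_trace

[handler_consoleHandler]
class=StreamHandler
level=INFO
formatter=wfFormatter
args=(sys.stdout,)

[formatter_wfFormatter]
format=%(asctime)s WF [-] %(message)s
datefmt=
"""

# Write the config to a temp file so fileConfig() can read it.
with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    f.write(CONF)
    path = f.name

try:
    logging.config.fileConfig(path, disable_existing_loggers=False)
finally:
    os.unlink(path)

# Mistral emits workflow trace messages under the 'workflow_trace' qualname.
logging.getLogger("workflow_trace").info("workflow 'wf1' started")
```

The rotating variants only differ in the handler class and ``args`` tuple (``RotatingFileHandler`` with a max size and backup count instead of ``StreamHandler``).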
Contributor License Agreement
=============================

In order to contribute to the Mistral project, you need to have signed OpenStack's contributor's agreement:

* https://docs.openstack.org/infra/manual/developers.html
* https://wiki.openstack.org/CLA

Project Hosting Details
=======================

* Bug trackers

  * General mistral tracker: https://launchpad.net/mistral
  * Python client tracker: https://launchpad.net/python-mistralclient

* Mailing list (prefix subjects with ``[Mistral]`` for faster responses)

  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

* Documentation

  * https://docs.openstack.org/mistral/latest/

* IRC channel

  * #openstack-mistral at FreeNode
  * https://wiki.openstack.org/wiki/Mistral/Meetings_Meetings

* Code Hosting

  * https://github.com/openstack/mistral
  * https://github.com/openstack/python-mistralclient
  * https://github.com/openstack/mistral-dashboard
  * https://github.com/openstack/mistral-lib
  * https://github.com/openstack/mistral-specs

* Code Review

  * https://review.openstack.org/#/q/mistral
  * https://review.openstack.org/#/q/python-mistralclient
  * https://review.openstack.org/#/q/mistral-dashboard
  * https://review.openstack.org/#/q/mistral-lib
  * https://review.openstack.org/#/q/mistral-extra
  * https://review.openstack.org/#/q/mistral-specs
  * https://docs.openstack.org/infra/manual/developers.html#development-workflow

* Mistral Design Specifications

  * https://specs.openstack.org/openstack/mistral-specs/
mistral-6.0.0/PKG-INFO0000664000175100017510000002637113245513605014326 0ustar zuulzuul00000000000000Metadata-Version: 1.1
Name: mistral
Version: 6.0.0
Summary: Mistral Project
Home-page: https://docs.openstack.org/mistral/latest/
Author: OpenStack
Author-email: openstack-dev@lists.openstack.org
License: Apache License, Version 2.0
Description-Content-Type: UNKNOWN
Description:
    ========================
    Team and repository tags
    ========================

    ..
image:: https://governance.openstack.org/tc/badges/mistral.svg
    :target: https://governance.openstack.org/tc/reference/tags/index.html

Mistral
=======

Workflow Service for OpenStack cloud. This project aims to provide a mechanism to define tasks and workflows without writing code, manage and execute them in the cloud environment.

Installation
~~~~~~~~~~~~

The following are the steps to install Mistral on debian-based systems.

To install Mistral, you have to install the following prerequisites::

    $ apt-get install python-dev python-setuptools libffi-dev \
        libxslt1-dev libxml2-dev libyaml-dev libssl-dev

**Mistral can be used without authentication at all or it can work with OpenStack.** In case of OpenStack, it works **only with Keystone v3**, make sure **Keystone v3** is installed.

Install Mistral
---------------

First of all, clone the repo and go to the repo directory::

    $ git clone https://git.openstack.org/openstack/mistral.git
    $ cd mistral

**Devstack installation**

Information about how to install Mistral with devstack can be found `here `_.

Configuring Mistral
~~~~~~~~~~~~~~~~~~~

Mistral configuration is needed to get it working correctly both with and without an OpenStack environment.

#. Install and configure a database which can be *MySQL* or *PostgreSQL* (**SQLite can't be used in production.**). Here are the steps to connect Mistral to a *MySQL* database.

   * Make sure you have installed the ``mysql-server`` package on your Mistral machine.

   * Install the *MySQL driver* for python::

       $ pip install mysql-python

     or, if you work in virtualenv, run::

       $ tox -evenv -- pip install mysql-python

     NOTE: If you're using Python 3 then you need to install ``mysqlclient`` instead of ``mysql-python``.

   * Create the database and grant privileges::

       $ mysql -u root -p
       mysql> CREATE DATABASE mistral;
       mysql> USE mistral
       mysql> GRANT ALL PRIVILEGES ON mistral.* TO 'mistral'@'localhost' \
              IDENTIFIED BY 'MISTRAL_DBPASS';
       mysql> GRANT ALL PRIVILEGES ON mistral.* TO 'mistral'@'%' IDENTIFIED BY 'MISTRAL_DBPASS';

#. Generate the ``mistral.conf`` file::

    $ oslo-config-generator --config-file tools/config/config-generator.mistral.conf \
      --output-file etc/mistral.conf.sample

#. Copy service configuration files::

    $ sudo mkdir /etc/mistral
    $ sudo chown `whoami` /etc/mistral
    $ cp etc/event_definitions.yml.sample /etc/mistral/event_definitions.yml
    $ cp etc/logging.conf.sample /etc/mistral/logging.conf
    $ cp etc/policy.json /etc/mistral/policy.json
    $ cp etc/wf_trace_logging.conf.sample /etc/mistral/wf_trace_logging.conf
    $ cp etc/mistral.conf.sample /etc/mistral/mistral.conf

#. Edit the file ``/etc/mistral/mistral.conf`` according to your setup. Pay attention to the following sections and options::

    [oslo_messaging_rabbit]
    rabbit_host =
    rabbit_userid =
    rabbit_password =

    [database]
    # Use the following line if *PostgreSQL* is used
    # connection = postgresql://:@localhost:5432/mistral
    connection = mysql://:@localhost:3306/mistral

#. If you are not using OpenStack, add the following entry to the ``/etc/mistral/mistral.conf`` file and **skip the following steps**::

    [pecan]
    auth_enable = False

#. Provide valid keystone auth properties::

    [keystone_authtoken]
    auth_uri = http://keystone-host:port/v3
    auth_url = http://keystone-host:port
    auth_type = password
    username =
    password =
    user_domain_name =
    project_name =
    project_domain_name =

#. Register Mistral service and Mistral endpoints on Keystone::

    $ MISTRAL_URL="http://[host]:[port]/v2"
    $ openstack service create --name mistral workflowv2
    $ openstack endpoint create mistral public $MISTRAL_URL
    $ openstack endpoint create mistral internal $MISTRAL_URL
    $ openstack endpoint create mistral admin $MISTRAL_URL

#. Update the ``mistral/actions/openstack/mapping.json`` file which contains all available OpenStack actions, according to the specific client versions of OpenStack projects in your deployment. Please find more detailed information in the ``tools/get_action_list.py`` script.

Before the First Run
--------------------

After local installation you will find the commands ``mistral-server`` and ``mistral-db-manage`` available in your environment. The ``mistral-db-manage`` command can be used for migrating database schema versions. If Mistral is not installed in the system, this script can be found at ``mistral/db/sqlalchemy/migration/cli.py`` and executed using the Python command line.

To update the database schema to the latest revision, type::

    $ mistral-db-manage --config-file upgrade head

To populate the database with standard actions and workflows, type::

    $ mistral-db-manage --config-file populate

For more detailed information about the ``mistral-db-manage`` script please check the file ``mistral/db/sqlalchemy/migration/alembic_migrations/README.md``.

Running Mistral API server
--------------------------

To run the Mistral API server::

    $ tox -evenv -- python mistral/cmd/launch.py --server api --config-file

Running Mistral Engines
-----------------------

To run the Mistral Engine::

    $ tox -evenv -- python mistral/cmd/launch.py --server engine --config-file

Running Mistral Task Executors
------------------------------

To run a Mistral Task Executor instance::

    $ tox -evenv -- python mistral/cmd/launch.py --server executor --config-file

Note that at least one Engine instance and one Executor instance should be running in order for workflow tasks to be processed by Mistral.

If you want to run some tasks on a specific executor, the *task affinity* feature can be used to send these tasks directly to a specific executor. You can edit the following property in your mistral configuration file for this purpose::

    [executor]
    host = my_favorite_executor

After changing this option, you will need to start (restart) the executor. Use the ``target`` property of a task to specify the executor::

    ... Workflow YAML ...
    task1:
      ...
      target: my_favorite_executor
    ... Workflow YAML ...

Running Multiple Mistral Servers Under the Same Process
-------------------------------------------------------

To run more than one server (API, Engine, or Task Executor) on the same process::

    $ tox -evenv -- python mistral/cmd/launch.py --server api,engine --config-file

The value for the ``--server`` option can be a comma-delimited list. The valid options are ``all`` (which is the default if not specified) or any combination of ``api``, ``engine``, and ``executor``.

It's important to note that the ``fake`` transport for the ``rpc_backend`` defined in the configuration file should only be used if ``all`` Mistral servers are launched on the same process. Otherwise, messages do not get delivered because the ``fake`` transport is using an in-process queue.

Project Goals 2018
------------------

#. **Complete Mistral documentation**.

   Mistral documentation should be more usable. It requires focused work to make it well structured, eliminate gaps in API/Mistral Workflow Language specifications, add more examples and tutorials.

   *Definition of done*: All capabilities are covered, all documentation topics are written using the same style and structure principles. The obvious sub-goal of this goal is to establish these principles.

#. **Finish Mistral multi-node mode**.

   Mistral needs to be proven to work reliably in multi-node mode. In order to achieve it we need to make a number of engine, executor and RPC changes and configure a CI gate to run stress tests on multi-node Mistral.

   *Definition of done*: CI gate supports MySQL, all critically important functionality (join, with-items, parallel workflows, sequential workflows) is covered by tests.

Project Resources
-----------------

* `Mistral Official Documentation `_
* Project status, bugs, and blueprints are tracked on `Launchpad `_
* Additional resources are linked from the project `Wiki `_ page
* Apache License Version 2.0 http://www.apache.org/licenses/LICENSE-2.0

Platform: UNKNOWN
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.5
Classifier: Environment :: OpenStack
Classifier: Intended Audience :: Information Technology
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: POSIX :: Linux
mistral-6.0.0/ChangeLog0000664000175100017510000025174213245513601015001 0ustar zuulzuul00000000000000CHANGES
=======

6.0.0
-----

* Really make the cron trigger execution interval configurable
* Consider size of output\_on\_error
* Tags in workflows were not being properly checked
* Make the cron trigger execution interval configurable
* Adding Keycloak authorization support
* Fix how a cron trigger starts a workflow
* Fixes mistral-server --version command
* More tests for running workflows based on existing
* Remove achieved goals from the lis of annual goals
* Fixing grammar mistake
* Using oslo\_log instead of logging
* Propagated a task timeout to a action execution
* modify the import order
* Fix docs to better reflect Jinja and YAQL usage
* Remove the invalid toctree
* Add claim\_messages and delete\_messages zaqar actions

6.0.0.0b3
---------
extension from the API ref docs * Fix the 'params' field of the workflow execution REST resource * Running new workflow based on an existing execution * the word arguements should be arguments * Updated from global requirements * Migrate the jobs to native Zuul v3 format * TrivialFix: remove redundant import alias * Remove any old client actions that no longer exist * Fix break\_on calculation in before\_task\_start * Fix std.http action doc * task name can not be reserved keyword * Fixed integration of the unit tests with PosrgeSQL * Remove the redundant word * Added session.flush() before update\_on\_match() * Added the limit on selection of delayed calls * Modify error spelling word * change import order * fix syntax error the 'that' can not be ignore * Updated from global requirements * Allow ssh utils to use an absolute path * Updated from global requirements * Added the missing options (SCHEDULER\_GROUP and CRON\_TRIGGER\_GROUP) to a generating config * Fix the error url * Remove ceilometer actions from mistral * Remove call to sys.exc\_clear() in Python 3 * Make workflow execution creation idempotent * Add missing user/project name in action context * Gracefully handle DB disconnected connect errors * Readonly db transactions for testing * Remove intree mistral tempest plugin * Minor cosmetic changes * Updated from global requirements * Actually add the yaml\_dump expression * Add executions yaql filter * Disable unstable tempest test\_run\_ssh\_proxied\_action test * Updated from global requirements * Use mock for HTTP calls in unit tests * Updated from global requirements * Change log level for RestControllers * Remove the \_\_init\_\_ method from the test action * Fix inconsistencies when setting policy values * Use the new action context in MistralHTTPAction * Pass the new ActionContext to mistral-lib * Use the latest policy-json-file reference * Clear error info 6.0.0.0b2 --------- * Re-work the direct action call tempest test * Make more CI jobs voting 
* Fix race condition between task completion and child task processing * Updated from global requirements * Log a warning log message if the task isn't found * Fix swift endpoint * Disable unstable tempest test\_create\_action\_execution\_sync test * Disable unstable tempest multi\_vim\_authentication test * Avoid tox\_install.sh for constraints support * Add id field to db query if no sorting order is provided * Use a session for keystone auth * Add new tempest tests for swift and zaqar client actions * Updated from global requirements * Allow filtering executions by their root\_execution\_id * Implement policy in code - docs and reno (end) * Implement policy in code - event trigger (11) * Implement policy in code - workflow (10) * Implement policy in code - workbook (9) * Implement policy in code - service and task (8) * Implement policy in code - member (7) * Implement policy in code - execution (6) * Implement policy in code - environment (5) * Implement policy in code - cron trigger (4) * Implement policy in code - action (3) * Implement policy in code - action execution (2) * Implement policy in code (1) * Don't use oslo context get\_logging\_values * Wrong handling of is\_system flag at workbooks causes DB error with MySQL 5.7 * Switch zaqarclient and swiftclient to use a session * Stop passing auth\_token to ironic-inspector-client * Modify log infomation to achieve the same format * zuul: update tripleo zuul v3 jobs * Remove setting of version/release from releasenotes * Remove \_get\_task\_executions function * Updated from global requirements * Delete rows directly * Updated from global requirements * Fix yaql / json\_pp deprecation warning * Remove \_get\_event\_trigger function * Add a periodic job to check workflow execution integrity * Fix wf\_trace info adding useless space at some conditions * Remove \_get\_db\_object\_by\_name\_or\_id function * Use mock for HTTP calls in unit tests * Updated from global requirements * Fix sporadically overwriting 
of finished workflow execution state * Add retries to read-only db operations * Remove \_get\_wf\_object\_by\_name\_and\_namespace function * Get rid of ensure\_\* functions from db api * Add a json\_dump expression function * Re-raise DB errors when evaluating expressions * Updated from global requirements * Do not parse updated\_at for task if it was not updated * [API] Support get/delete cron triggers in any projects for admin * [API] Support project\_id filter in cron\_triggers API * Normalize sorting * 'all' parameter breaks task context * Zuul: add file extension to playbook path * Fix launcher tests * Drop pyflakes from the test requirements * Add a config option to disable cron triggers * Fix named locks implementation * Remove wrapping of database exceptions in \_get\_collection() * Replace or\_ with in\_ function for searching queries * Invoke AuthHook before ContextHook * Fix deletion of delayed calls * Add a yaml\_dump expression * Redundant alias in import statement 6.0.0.0b1 --------- * Add the Ironic wait\_for\_provision\_state action * Revert "Enable eventlet monkey patching for MySQLdb driver" * Optimize mistral queries for 'get\_task\_executions' * [Event-engine] Make listener pool name configurable * Updated from global requirements * Add yaml and json parsing functions * Decoupling of Mistral tempest test from Mistral code base * Make scheduler delay configurable * Optimize sending result to parent workflow * Added created\_at and updated\_at fields to functions task() and exection() * Allow mistral actions to run when authentication is not configured * Mistral fails on RabbitMQ restart * Enable eventlet monkey patching for MySQLdb driver * remove all common jobs * Add actions for the ironic virtual network interface commands * Add get cron-trigger by id support * Dynamic action name evaluation * Migrate Mistral jobs to Zuul v3 * Updated from global requirements * TrivialFix: Add doc/build directory in .gitignore * Update README with Keystone 
authtoken config * Replace @loopingcall.RetryDecorator with @tenacity.retry * Updated from global requirements * Removed NOT IN query from expiration policy * Use @db\_utils.retry\_on\_deadlock to retry scheduler transactions * Updated from global requirements * Add project\_id to API resources * Add README.mistral.conf doc in etc directory * TrivialFix: pretty format the json code block * Add root\_execution\_id to sub-workflow executions * Use get\_rpc\_transport instead of get\_transport * Updated from global requirements * Add mistral/tests/unit/expressions/\_\_init\_\_.py * Updated from global requirements * Cleanup test\_std\_http\_action * Fixes issue rendering strings containing multiple jinja expressions * Handle case with None encoding during std.http action execution * Clean up screen and tail\_log references * Using current pike stable release for devstack * Fix Kombu RPC threading and use within multiprocess environment * Fix "with-items" locking * Fix to use . to source script files * Updated from global requirements * Fix services launcher to handle shutdown properly * Catch DBEntityNotFoundError exceptions for invalid AdHoc Actions * Add "API server started." 
print statement for the API wsgi service * Adding doc8 to test-requirements * Updated from global requirements * Add ssl support for keycloak auth middleware * Process input defaults and output transforms for nested AdHoc Actions * Remove build files before run tox doc builder * Updated from global requirements * Dynamic workflow name evaluation * Fix cron keystone calls when token is available * Fix test for decoding utf8 * Update URL and indentations * import fails in python3 * support py3 when doing db\_sync * Update reno for stable/pike 5.0.0 ----- * Add doc8 rule and check doc/source files * [Triggers] Fix running openstack actions via triggers * TrivialFix: Fix typo * Cascade pause from pause-before in subworkflows * Cascade pause and resume to and from subworkflows * Move dsl\_v2 document to user guide * Updated from global requirements * [Trusts] Fix deleting trust * Fix event-triggers workflow namespace * Small typo fix * Fixed crontrigger execution error * Fix drop index in version 022 DB upgrade script * Allow async action execution to be paused and resumed * Set mistral-dashboard default branch to master * Create and run a workflow within a namespace * Allow to list all cron-triggers * Create and run a workflow within a namespace * Fix auth in actions if there is no auth\_uri in context * Use more specific asserts in tests * Add releasenote for public event triggers * Remove deprecation warning from tests * Add Glare action pack * TrivialFix: Fix typo * Use recommended function to setup auth middleware in devstack * Updated from global requirements * Updated from global requirements 5.0.0.0b3 --------- * Updated from global requirements * Fix the pep8 commands failed * Fix cron-triggers and openstack actions * Remove local Hacking for M318 * Add a hacking rule for string interpolation at logging * Admin guide landing page added * [Event-triggers] Allow public triggers * Make README better * Use 'related\_bug' decorator from stable interface * 
Unsupported 'message' Exception attribute in PY3 * Unsupported 'message' Exception attribute in PY3 * Update UTC time validate cron trigger * Fix some reST field lists in docstrings * Updated from global requirements * Cleanup docs to include params * Change the logo to lowercase * Replace e.message with str(e) * Change the misplaced index links * Chnage the mailing list URL * Remove note for nested ad-hoc actions * Updated from global requirements * Update and optimize documentation links * Replace test.attr with decorators.attr * Updated from global requirements * Handle empty response content during its decoding in std.http * Ignore linux swap files range * Updated from global requirements * Enable some off-by-default checks * Updated from global requirements * Update reference link to Ocata * Adding warning-is-error to doc building * Updated from global requirements * Remove the redundant default value * Tests: Remove the redundant method * Fixing deleting cron-trigger trusts * Fix get event triggers * Applying Pike document structure * Update the commands in README.rst * Fix tox * Improve keycloak auth module * Revert "Use recommended function to setup auth middleware in devstack" * Update .gitignore * Switch from oslosphinx to openstackdocstheme * Use recommended function to setup auth middleware in devstack * Update docker build * Add cron name/id to workflow execution description * Remove .testrepository directory from testenv * Updated from global requirements * Centralize session creation and authorization from OS clients * Updated from global requirements * Setup devstack with ini\_rpc\_backend * Replace the usage of 'admin\_manager' with 'os\_admin' * Use BoolOpt instead of StrOpt * Refactor mistral context using oslo\_context * Add more use of mistral-lib in mistral * Updated from global requirements * Add the baremetal\_introspection action for aborting * Updated from global requirements * Make sure that the field "state\_info" trimmed as expected * 
Set access\_policy for messaging's dispatcher * Increase the Environment variable column length * Updated from global requirements * Replace oslo.messaging.get\_transport with get\_notification\_transport * Change author in setup.cfg * Replace assertEqual([], items) with assertEmpty(items) * Optimize the link address * Always retrieve endpoint in mistral action * This is only a minor defect in README.rst 5.0.0.0b2 --------- * Update the contents of configuration guide * Minor nits to README * Added style enfore checks for assert statements * Make "triggered\_by" work in case of "join" tasks * Remove the deprecated configuration options * Stop using abbreviation DSL in document * Update python-neutronclient version * [Trusts] Fixing trusts deletion * Updated from global requirements * Remove 'sphinxcontrib.autohttp.flask' from sphinx config * Fixing indentation in docs * Updated from global requirements * Updated from global requirements * Fix doc generation for python 3 * Propagate "evaluate\_env" workflow parameter to subworkflows * [Regions] Fixing determining keystone for actions * Add one more test for task() function used in on-success * Add 'runtime\_context' to task execution REST resource * Add 'triggered\_by' into task execution runtime context * Refactor rest\_utils * Optimize API layer: using from\_db\_model() instead of from\_dict() * Get rid of ambiguity in region\_name * Update AdHoc Actions to support context data references * Adding mistral\_lib actions to mistral * Update Docker README * Updated from global requirements * Refactor db model methods * Updated from global requirements * Add release note for "action\_region" support * Adding log to db\_sync * Add "action\_region" param for OpenStack actions * Updated from global requirements * Release notes for "evaluate\_env" * Add 'evaluate\_env' workflow parameter * Add hide\_args=True to @profiler.trace() where it may cause problems * Remove unused logging import * Fix WSGI script for gunicorn * 
Revert "Support transition to keystone auth plugin" * Change service name to workflowv2 in docs * Support transition to keystone auth plugin * Fix a typo * Force Python 2 for pep8 linting * Add support for mistral-lib to Mistral * Updated from global requirements * Refactor Kombu-based RPC * Make rpc\_backend not engine specific * Add option to run actions locally on the engine * Don't save @property methods with other attributes * Fix the keystone auth url problem * Optimize the link address 5.0.0.0b1 --------- * Enable WSGI under Apache in devstack * Add "Project Goals 2017" to README.rst * Fix the doc for 'concurrency' policy * Add documentation for the engine commands * Optimizing lang schema validation * Advanced publishing: add 'global' function to access global variables * Advanced publishing: add publishing of global variables * Advanced publishing: change workflow lang schema * Fix serialization issue * Fix a description of 'executor\_thread\_pool\_size' option in Kombu RPC * Changed the README.rst and added debug guide * Updated from global requirements * Disable pbrs auto python-api generation * Set the basepython for the venv tox environment * Use Jinja2 sandbox environment * Limit the number of finished executions * Add Apache License Content in index.rst * Fix gate failure * Add release note for resource RBAC feature * Updated from global requirements * Rework the CLI Guide * Allow admin user to get workflow of other tenants * Role based resource access control - delete executions * Use the Mistral syntax highlighting on the dsl v2 page * Updated from global requirements * Replace six.iteritems() with .items() * Role based resource access control - update executions * Add sem-ver flag so pbr generates correct version * Remove the empty using\_yaql gude * Use the plain syntax highlighting in the webapi example * Remove the highlighting choice 'HTTP' * Add a Mistral lexer for pygments * Don't create actions when inspection fails * Change Http action 
result content encoding * Updated from global requirements * Role based resource access control - get executions * Remove unnecessary setUp function in testcase * Add check for idempotent id in tempest tests * Remove unnecessary tearDown function in testcase * Fix work of task() without task name within on-clause cases * Explicitly set charset to UTF-8 in rest\_utils for webob.Response * Updated from global requirements * Replaces uuid.uuid4 with uuidutils.generate\_uuid() * Surpress log with context data and db data * Add missing schema validation and unit tests for 'publish-on-error' * Add release note for 'created\_at' support in execution() * Add 'created\_at' to execution() yaql function * Change some 3rd party package default log levels * Remove log translations * Trim yaql/jinja operation log * Fix cinder/heat base import * Add missing swift actions * Use LOG.exception when adding an OpenStack action fails * Updated from global requirements * Add hacking for code style checks * Fix multi\_vim tempest test failure * Updated from global requirements * Add unit test for deleting workflows by admin * Improve database object access checking * Updated from global requirements * Log stack trace if action initialization faild * Updated from global requirements * Refactor methods in utils related to dicts * Refactor workflow/action input validation * Fully override default json values with user input * Add head\_object action mapping for swift * Updated from global requirements * Deleting the expired execution with batch size * Allow users to set the test run concurrency * Include the missing lines in the coverage report * Don't use 'master' as that isn't always true * [doc] Changed the output fields in quickstart guide * Improve the CONTRIBUTING.rst * Add \`coverage erase\` to the cover report * Fix update workflow by admin * Rename package 'workbook' to 'lang' * Fix get\_next\_execution\_time * Add idempotent\_id decorator to tempest testcases * Use utcnow() in 
expired executions policy test * Every unit test creates and registers every OpenStack action * Updated from global requirements * Add idempotent\_id decorator to tempest testcases * Verify the retry policy when passed in via variables * Reduce the number of with-items and retried in the concurrency test * Remove the delay from the direct workflow rerun tests * External OpenStack action mapping file support * Update docs for tasks function * Remove output from list action executions API * Update test requirement * Updated from global requirements * Correction in workflow state change handling * Update Dockerfile to use Xenial * Force Python 2 for documentation builds * Fix memory leak related to cached lookups * Fix for coverage job showing 0% coverage for kombu * Add Keycloak authentication doc for client side * Add details into docs about semantics of 'on-XXX' clauses * Add Keycloak authentication doc for server side * Refactor RPC serialization: add polymophic serializer * Updated from global requirements * Add reno for tasks function * Updated from global requirements * Remove '\_\_task\_execution' from task outbound context * Updated from global requirements * Revert "External OpenStack action mapping file support" * Prepare for using standard python tests * Fix for failing services on py3 with kombu driver * Remove support for py34 * External OpenStack action mapping file support * Remove wrong licensing * Refactor RPC serialization: remove JsonPayloadSerializer class * Update reno for stable/ocata 4.0.0.0rc1 ---------- * Fix for failing gates * Enforce style check for xrange() * Fix for failing services on py3 with kombu driver * Fix try import of openstack client modules * Remove some profiler traces, logs, use utils.cut() where needed * Remove unnecessary evaluation of outbound context * Optimizing utils.cut() for big dictionaries and lists * Fix doc build if git is absent 4.0.0.0b3 --------- * Updated from global requirements * Add support for Rabbit HA * 
Refactor rpc configuration loading * Updated from global requirements * Invalid jinja pattern regex corrected * Add script for unit test coverage job * Updated from global requirements * In Python 3.7 "async" and "await" will become reserved keywords * Allow hyphens in Workflow and ad-hoc action names * External OpenStack action mapping file support added * Make 'task' function work w/o a task name * using utcnow instead of now in expiration policy * Enforce style check for assertIsNone * Add action "std.test\_dict" * Register Javascript action additionally as 'js' action * Role based resource access control - update workflows * Remove insecure flag from the Baremetal Introspection client * Updated from global requirements * Make kombu driver work in multi-thread manner * Fix unit test that rely on order of return from DB * Use utcnow() instead of now() * Stop excpecting to have rabbit flags set when using fake driver * Updated from global requirements * Insecure flag added to openstack context * Initial commit for mistral api-ref * Removed unnecessary utf-8 encoding * Python3 common patterns * Updated from global requirements * Fix unit test that rely on order of return from DB * Fix for failing kombu dsvm gate * Move mock requirement to test-requirements.txt * Using sys.exit(main()) instead of main() * Use i18n for help text * Added gnocchi action pack * Add 'retry\_on\_deadlock' decorator * Fix two failing test cases in test\_tasks * Add the "has" DB filter * Use assertGreater() or assertLess() * Fix version response from root controller * Adding releasenotes for aodh action support * Updated from global requirements * Refactor 'stress\_test' to fit the current layout better * Add rally tests for 'join': 100 and 500 parallel tasks * Add a test for 'with-items' task: count=100, concurrency=10 * Add aodh actions to mistral * Disable invalid API test till it's fixed * Copy \_50\_mistral.py file from enabled folder * Fix doc for missing dashboard config file * Role 
based resource access control - get workflows * Make body of std.email optional * Refresh object state after lock acquisition in WithItemsTask * Small adjustments in WithItemsTask * Fix 'with-items' task completion condition * Apply locking to control 'with-items' concurrency * Slightly improve 'with-items' tests * Get rid of with\_items.py module in favor of WithItemsTask class * Refactor and improve 'with-items' algorithms * Fix docs in README.rst * Fix configuration generator * Fix version response from root controller * Exclude .tox folder from coverage report * Updated from global requirements * Add more tests to mistral rally * Replace six.iteritems() with .items() * Display all the possible server values 4.0.0.0b2 --------- * Correct missspellings of secret * Minor changes in the document * Added test cases for a few possible scenarios * change the cron-trigger execution time from localtime to UTC * Use the with keyword dealing with file objects * Modify the link in 'README.rst' * Modify the function "\_get\_spec\_version(spec\_dict)" * Update the wording in the actions terminology docs * Remove commented-out Apache 2 classifier from setup.cfg * Updated from global requirements * Fix for failing kombu gate * modify something in 'dsl\_v2.rst' * Fix two errors in YAML example and a error in action doc * Handling MistralException in default executor * Fix a syntax error in yaml example * std.email action requires a smtp\_password * Change version '1.0' to '2.0' * Add descriptions for on\_task\_state\_change parameters * Updated from global requirements * Added releasenote for retry policy update * Cleanup obvious issues in 'with-items' tests * Updated from global requirements * Allow "version" to be within workflow names in workbooks * Updated from global requirements * Yaql Tasks Function * Bump Ironic API version to 1.22 when creating the Ironic client * Small changes to docs to comply with openstack document style * Fix launch process of Mistral components * 
Modify import style in code * Some spelling errors * Initial commit for mistral-i18n support * Add timestamp at the bottom of every page * Show team and repo badges on README * Make CI gate for unit tests on mysql work * Fix the default configuration file path * Updated from global requirements * Mock the HTTP action in the with\_items tests * Fix devstack plugin compatibility * Updated the retries\_remain statement * Updated from global requirements * Add Ironic RAID actions * Revert "Remove unused scripts in tools" * Add a test for invalid task input expression * Fix config import in javascript action module * Make Jinja evaluator catch and wrap all underlying exceptions * Make YAQL evaluator catch and wrap all underlying exceptions * Replace 'assertFalse(a in b)' with 'assertNotIn(a, b)' * Updated from global requirements 4.0.0.0b1 --------- * Replace retrying with tenacity * Add cancelled state to action executions * Updated from global requirements * Fix possible DB race conditions in REST controller * Remove unused pylintrc * Added releasenote for Senlin Action Pack * Migrated to the new oslo.db enginefacade * Added senlin action pack * Few changes in the doc * Use mock for a bad HTTP call in unit tests * Few changes related to the doc blueprint * Fix REST API dangling transactions * Fix error message format in action handler * Fix error message format in other task handler methods * Migrate mistral task\_type * Fix error message format for task run and continue * Fix missing exception decorators in REST API * Remove unused scripts in tools * Replace uuid4() with generate\_uuid() from oslo\_utils * Updated from global requirements * Add type to tasks API * Handle region\_name in openstack actions * Add more tests to mistral rally * Replace oslo\_utils.timeutils.isotime * Adding Variables to Log Messages * Updated from global requirements * cors: update default configuration * Added unit tests for workflow executions and task executions filtering * Fix DB API 
transaction() * Run actions without Scheduer * Get correct inbound tasks context for retry policy * Updated from global requirements * Adding tests for workbook execution and execution list to Rally * Use service catalog from authentication response * Updated from global requirements * Fix a bug in the algo that determines if a route is possible * Enable DeprecationWarning in test environments * Added additional info in devstack/readme.rst * Fixing 'join' task completion logic * Updated from global requirements * Removal of unneccessary directory in run\_tests.sh * Get service catalog from token info * Add one more test for YAQL error message format * Change format of YAQL errors * Updated from global requirements * Update .coveragerc after the removal of openstack directory * Enable code coverage report in console output * Updated from global requirements * Remove logging import unused * Cleanup Newton Release Notes * Publish/output in case of task/workflow failure * Don't include openstack/common in flake8 exclude list * Fix PEP8 issue * Change task() function to return 'null' if task doesn't exist * Enable release notes translation * Updated from global requirements * Describe vital details for debugging in PyCharm * Update documentation for multi-vim support * Add Jinja evaluator * Minor changes to the documentation * Minor changes in the installation guides * Add a way to save action executions that run synchronously * Import haskey from keys module * Declare the encoding of file * Changes made to comply with OpenStack writing style * Cleanup the Quickstart Documentation * Stop adding ServiceAvailable group option * Updated from global requirements * Update heat actions in mapping.json * Updated from global requirements * Accept service catalog from client side * Using assertIsNone() instead of assertEqual(None, ...) 
* Updated from global requirements * Make deafult executor use async messaging when returning action results * Disable Client Caching * Updated from global requirements * Revert "Update UPPER\_CONSTRAINTS\_FILE for stable/newton" * Remove environment data from task inbound context * Use parenthesis to wrap strings over multiple lines * Updated from global requirements * Using sys.exit(main()) instead of main() * Do not include project name in the client cache key * Updated from global requirements * Add tests to check deletion of delayed calls on WF execution delete * Delete all necessary delayed calls on WF stop * Update UPPER\_CONSTRAINTS\_FILE for stable/newton * Fix for timeouting actions on run-action * Fix a typo in access\_control.py * Adding a script for fast mistralclient help generation * Make Javascript implementation configurable * Add unit test case for deletion of execution in case of (error and cancelled) * Avoid storing workflow input in task inbound context * Replace assertEqual(None, \*) with assertIsNone in tests * Updated from global requirements * Add \_\_ne\_\_ built-in function * Update reno for stable/newton * Remove context.spawn * Correct documentation about task attributes 'action' and 'workflow' * Updating mistralclient docs * Abstract authentication function * Fix for raising excepton from kombu 3.0.0.0rc1 ---------- * Remove workflow spec, input and params from workflow context * Add a smarter delay between workflow completion checks * Optimize the logic that check if 'join' task is allowed to start * Copy cached WF spec stored by definition id into WF execution cache * Add functional tests for event engine functions * Added unit tests for Workbook and Workflow filtering * Delete unnecessary comma * Fixed task in\_bound context when retrying * Enable changing of rpc driver from devstack * Take os\_actions\_endpoint\_type into use * Fix mistral API docs Fixing v2.rst to refer to new module paths, and adding the cron trigger param to 
POST v2/cron\_triggers/ documentation * Add event trigger REST API * Using count() instead of all() for getting incompleted tasks * Fix for raising exception directly to kombu * Updated from global requirements * Fix delayed calls DB migration * standardize release note page ordering * Fixed http links in CONRIBUTING.rst * Optimize finder functions for task executions * Change execution mechanism for 'join' tasks * Fixed an incorrect migration revision number in a comment * cast to str for allowable types * Raise NotImplementedError instead of NotImplemented * Optionally include the output when retrieving all executions * Add \_\_ne\_\_ built-in function * Fix getting URLs / and /v2 * Add event configuration for event trigger 3.0.0.0b3 --------- * Add 'uuid' YAQL function * Sync tools/tox\_install.sh * Updated from global requirements * Fix for 'Cannot authenticate without an auth\_url' * Add client caching for OpenStack actions * Add setuptools to requirements.txt * Task publish does not overwrite variable in context Edit * Updated from global requirements * Clean imports in code * TrivialFix: Remove logging import unused * Add a note to the documentation about std.fail * Some minor code optimization in post\_test\_hook.sh * Updated from global requirements * Fix for not working 'run-action' on kombu driver * Updated from global requirements * Fix documentation * Clean imports in code * Use more specific asserts in tests * Use upper constraints for all jobs in tox.ini * Updated from global requirements * Updated the configuration guide * Add a DB migration for named locks * Implement named transactional lock (semaphore) * Updated from global requirements * Closes-Bug: 1607348 * Optimize task defer() method * Optimize direct workflow controller * Updated from global requirements * Updated from global requirements * Fix task post completion scheduling * Fix \_possible\_route() method to account for not completed tasks * Add 'wait-before' policy test with two chained 
tasks * Fix task 'defer' * Filtering support for actions * Increase size of 'task\_executions\_v2.unique\_key' column * Add 'join after join' test * Slightly improve workflow trace logging * Fix workflow and join completion logic * Towards non-locking model: remove pessimistic locks * Fix specification caching mechanism * Towards non-locking model: make 'with-items' work w/o locks * Make mistral work with amqp and zmq backends * Towards non-locking model: adapt 'join' tasks to work w/o locks * Add unique keys for non locking model * Updated from global requirements * Fix GET /executions/ to init 'output' attribute explicitly * Fix past migration scripts discrepancies * fix for get action executions fails with "has no property 'type" * Updated Doc for SSL configuration * Use actual session for ironic-inspector action population * Added support for SSL connection in mistra-api server * Towards non-locking model: decouple WF completion check via scheduler * Towards non-locking model: use insert\_or\_ignore() for delayed calls * Towards non-locking model: add insert\_or\_ignore() on DB API * Fix the use of both adhoc actions and "with-items" in workflows * Towards non-locking model: removing env update from WF controller * Updated from global requirements * DB migration to three execution tables and increase some columns * Updated from global requirements * Add state info for synchronous actions run from CLI * Towards non-locking model: fix obvious workflow controller issues * Towards non-locking model: Add 'unique\_key' for delayed calls * Add \_get\_fake\_client to ironic-inspector actions * Add target parameters to REST API * Update docs and add release not for safe-rerun flag * Invalidate workflow spec cache on workflow definition updates * Removing unnecessary workflow specification parsing * Splitting executions into different tables * Added releasenote for https support * Add cancelled state to executions * Enable user to use transport\_url in kombu driver * 
Fixed trivial issue in exception message * Updated from global requirements * Fix DSLv2 example according to Mistral Neuton * Updated from global requirements * Use 'rpc\_response\_timeout' in kombu driver * Use Paginate query even if 'limit'or 'marker' is not set * Remove task result for collection REST requests * Allow to use both name and id to update action definitions * Remove some inconsistency in DB api * Get rid of oslo\_db warning about "id" not being in "sort\_keys" * Add event engine service * Error handling test: error in 'publish' for a task with 'on-error' * Added 'pip install -r requirements.txt' instruction * Executor fails actions if they are redelivered * Move the remainder of REST resources to resources.py * Move REST resources action, action execution and task to resources.py * Add the new endpoint /v2/tasks//workflow\_executions * Allow to use both name and id to access action definitions * Pass 'safe-rerun' param to RPC layer * Initialize RPC-related flag when starting API * Update in installation package list in installation guide * Add param 'safe-rerun' to task * Create MistralContext from rpc context in kombu engine * Add db models for event trigger * Updated from global requirements * Fix SPAG errors in Quickstart and Main Features docs * Fix some trivial SPAG errors in docs * Rename package mistral.engine.rpc to mistral.engine.rpc\_backend * Fixing filtering in task controller * Add Python 3.5 classifier and venv * Updated from global requirements 3.0.0.0b2 --------- * Fix for YaqlEvaluationException in std.create\_instance workflow * Updated from global requirements * Add tests for Kombu driver * Release note for KeyCloak OIDC support * Add KeyCloak OpenID Connect server-side authentication * Add authentication options for KeyCloak OIDC * Add proper handling for implicit task completion * Add proper error handling for task continuation * Add error handling tests: invalid workflow input, error in first task * Add more tests for error 
handling * Fix utility print\_executions method * Log warn openstack action generation failures * Fix Magnum action \_get\_fake\_class * Fix Murano action \_get\_fake\_class * Stylistic cleanups to lazy loading patch * Add configuration option for endpoint type * Add filters to all collections listing functions (tags included) * Lazy load client classes * Integrating new RPC layer with Mistral * Make RPC implementation configurable * Adding OsloRPC server and client * Add support for custom YAQL functions * Remove obsolete config option "use\_mistral\_rpc" * Add tacker actions in mistral * Update Expiration Policy Documentation * New RPC layer implementation * Don't create actions when attempting to update one that doesn't exist * Updated from global requirements * Add zake into dependencies * Add action context to all action executions * Fix SSHActionsTestsV2 failure * Updated mapping.json file * Support recursive ad-hoc action definitions * Updated from global requirements * Updated from global requirements * Updated from global requirements * Use client credentials to retrieve service list * Remove std.mistral\_http action from tests * Doc updated for oslo\_policy configuration * Updated from global requirements * Remove .mailmap file * Fix mysql driver installation section in readme * Fix API inconsistencies with GET /v2/workflows * Fixed fake clients of glance and designate * Fixed get\_actions\_list script to get glance actions * Fixed get\_actions\_list script to get designate actions * Example Mistral docker container broke due to oslo.policy update * Refactored tempest tests * Release note for magnum actions support * Fix postgresql test failure * Add configuration for Mistral tempest testing * Added doc string for enforce method * Release note for murano actions support * Add magnum certificates and mservices actions * Release note for role base access control * Added role base authentication support * Added murano actions * Add magnum bays actions * 
Enable osprofiler to measure performance * Rename the to\_string method to to\_json to clarify it's purpose * Support JSON data in JSON API type * Add Magnum actions * Updated from global requirements * Removing redundant wf\_ex\_id parameter for rerun across the code * Add explicit preconditions for methods of Action, Task and Workflow * Add a test that verifies an old bug with join * Refactoring workflow handler * Fix invalid type usage for join * mistral actions for designate v1 api's not working * Updated from global requirements * Remove AUTHORS file * Remove AUTHORS file from git tracking * Add missing argument in exception string * Updated from global requirements * Use LOG.exception when logging exceptions 3.0.0.0b1 --------- * Release notes for fail/pause/success transition message * Updated from global requirements * Fail/Success/Pause transition message * Remove unnecessary database transaction from Scheduler * Update .mailmap * Refactor Mistral Engine * Updated from global requirements * Updated from global requirements * Fixes the Mistral Docker image * Updated from global requirements * Return 'Unknown error' when error output is empty * Fix client in TroveActions * Add Python 3.4 to the classifiers * Remove unnecessary executable permissions * Updated from global requirements * Add baremetal.wait\_for\_finish action to mapping * Update get\_arg\_list\_as\_str to skip func params * Updated from global requirements * Enforcing upper constraints for tox test jobs * Fix get task list on YAQL error in with-items * Add API to validate ad-hoc action * Updated from global requirements * Updated from global requirements * Replace keystone CLI with openstack CLI * Add Designate apis as mistral actions * Remove oslo.messaging hack since it's broken with 5.0.0 version * Fix the yaql github repository * Updated from global requirements * Updated from global requirements * Fix mistral installation in devstack * Refactoring exception hierarchy * Updated from global 
requirements * Fixing engine facade hierarchy * Fixed issue related to docker image creation * Updated from global requirements * Rename base API test class * Disable cron trigger thread for API unit tests * Disabled ssl warnings while runing tempest tests * Add extra checks for the existance of executor\_callback * Updated from global requirements * Updated from global requirements * Added script to create docker image * Switch to auto-generated cron trigger names in unit tests * tempest: fix dir\_path * Leave more relevant comment in engine race condition test * Add utility methods to test action executions more conveniently * Fixing failing functional tests for Cinder and Heat actions * Update OpenStack actions mapping * Updated from global requirements * Unblock skipped test * Replace self.\_await(lamdba: ..) constructs with more readable calls * Add auth\_enabled=False to a cron trigger test * Updated from global requirements * Updated from global requirements * Updated from global requirements * Unblock skipped tests in test\_action\_defaults.py * Updated from global requirements * Fixing issue with different versions of oslo\_messaging * Getting rid of task result proxies in workflow context * Fix typos in Mistral files * Hacking log for warning * Fixing engine transaction model and error handling * Refactor workflow controller and fix a bug in \_fail\_workflow() * Fixing a bug in DB API method that acquires entity lock * Also package mistral\_tempest\_tests * module docs are not being generated * Update reno for stable/mitaka 2.0.0.0rc1 ---------- * Ack message after processing (oslo.messaging) * Run mistral services as separate processes * Fix compatibility with WebOb 1.6.0 * Reduce spec parsing some more * register the config generator default hook with the right name * Moved CORS middleware configuration into oslo-config-generator * Updated from global requirements * Deleting redundant trust creation in workbook uploading mechanism * Use tempest.lib 
instead of tempest-lib * Fix with-items task termination when sub-workflows fail * Restruct README file * Updated from global requirements * Updated from global requirements 2.0.0.0b3 --------- * Fix the problem when parse config file * Add asynchronous actions doc * Add release notes for M-3 * Updated from global requirements * Updated from global requirements * Fixed 'workflow\_name' key error * Change for synchronous Mistral actions from CLI * Updated from global requirements * Delete workflow members when deleting workflow * Add Mistral action pack * Release notes for Barbican actions * Updated from global requirements * Updated from global requirements * Add timestamp for member api response * Show shared workflows * Add actions to expose OpenStack Barbican APIs * Updated from global requirements * tempest-lib has been added to requirements.txt * Fix occasional test failure by SSHActions * Reduce spec parsing in workflow lifecycle * Support workflow id in execution operations * Add workflow id column to executions\_v2 table * Fix occasional test failure by assertListEqual * Added CORS support to Mistral * Fix spellings for two words * BaremetalIntrospectionAction get endpoint by service\_type * Implement basic Zaqar queue operations * Fix with-items concurrency for sub-workflows * Mistral tests will run from tempest plugin * Use proper way to initialize nova client * Updated from global requirements * Fix for not running 'on-success' task after task with 'with-items' * Fix quickstart doc error * Added link for pre-built docker image * Fix rerun of task in subworkflow * Fixed engine tests * Removed mistral/tests/functional * Updated from global requirements * Fix multiple reruns of with-items task * Remove argparse from requirements * Updated from global requirements * Add release note for tempest plugin 2.0.0.0b2 --------- * Add release note for swift action support * Add task\_execution\_id to workflow execution in API * Support workflow sharing API * Change 
LOG.warn to LOG.warning * Add db operations for resource members * Add db model for resource sharing * Remove unused logging import * Update REST API to support env update * Allow env update on resume and rerun workflows * Add support for OpenStack Swift actions * Disallow user to change workflow scope * Replace assertTrue(isinstance()) with assertIsInstance() * Updated from global requirements * Support workflow UUID when creating cron trigger * "test\_ssh\_actions" failed test has been fix * Fix db error when running python34 unit tests * Updated dynamic credential support for funtional test * Trival: Remove unused logging import * Drop py33 support * Release note for mistral-docker-image * Added README.rst file for tempest plugin * Added base.py to tempest plugin * Added engine to tempest plugin * Added test\_mistral\_basic\_v2.py to tempest plugin * Initial layout for mistral tempest plugin * Added mistral default actions * If task fails on timeout - there is no clear message of failure * devstack/plugin.sh: stop using deprecated option group for rabbit * Fix client name in setUpClass's method in 'test\_ssh\_actions' * Documentation for Mistral and Docker * Added Dockerfile to create docker image * Fix example for workbook in doc * Support UUID when deleting a workflow definition * Support UUID when updating a workflow definition * Support UUID when getting a workflow definition * Fix DB migration 009 constraint dropping * Add releatenote for fix execution saved in wrong tenant * Updated from global requirements * Workflow name can not be in the format of UUID * Fix join on branch error * Updated from global requirements * Get "cron trigger" list using model query * Add support for OpenStack Ironic Inspector actions * Updated from global requirements * Refactor action generator * Fix concurrency issues by using READ\_COMMITTED * Ignored PEP257 errors * Fix example for ad-hoc action in doc * Numerous debug messages due to iso8601 log level * Fixing execution 
saved in wrong tenant * Updated from global requirements * Pass environment variables of proxy to tox * Make test\_expiration\_policy\_for\_executions stable * Delete python bytecode before every test run * Fix state\_info details for with-items task error * Reset task state\_info on task re-run * Run pep8 on some tools python files * Remove version from setup.cfg 2.0.0.0b1 --------- * Add support for OpenStack Ironic actions * Fix tools/get\_action\_list.py * Update install\_venv.py so it says 'Mistral' * Add etc/mistral.conf.sample to .gitignore * Add database indices to improve query performance * Result will be [], if list for with-items is empty * Added Unit test when policy input is variable * Improve error message for YAQL task function * Add release notes for trove support * Add release notes for Cinder v2 support * Updated from global requirements * Force releasenotes warnings to be treated as errors * Fix sending mail to multiple to\_addrs * Correct heatclient comment in mapping.json * Remove running of CLI tests on commit to mistral repo * Change installation of python-mistralclient in the gates * Fix database upgrade from a new database * Updated from global requirements * Fix task state for YAQL error in subflow output * Moved to cinderv2 client support * Show project id when retrieving workflow(s) * Updated from global requirements * Add the CONTRIBUTING.rst file * Fix with-items concurrency greater than the number of items * Adding releasenotes management to Mistral * Use setup\_develop instead of setup\_package in plugin.sh * Add Trove to mistral actions * Fix cron-trigger's execution with pattern and first time * Pass creds into the clients.Manager() in functional tests * Move base.py and config.py under unit/ folder * Add ceilometer action support * Increased size of "state\_info" column to 64kb * Skipped some tests in py3 environment * Fixing reference of floating\_ips\_client in tests * OpenStack typo * Updated from global requirements * Ensure 
only one WF execution for every CT cycle * Wrap sync\_db operations in transactions * Remove iso8601 dependency * Fix all H405 pep8 errors * Adding callback url to action context * Updated from global requirements * Remove kombu as a dependency for Mistral * Move the default directories into settings file * Removing wait() when initializing notification listener * Updated from global requirements * Do not use len() in log\_exec decorator * Fixing wf execution creation at initial stage * remove default=None for config options * Fixing workflow execution state calculation * Resolved encoding/decoding problem * Wrapper is used instead of direct json library * Comparison operator has been changed * Fixed some unit test issue * Filter is converted to list * Fix state change on exception during task state change * Updated from global requirements * Change in sort direction at execution controller * Avoid comparison between "None" type and "int" type * Division result is wrapped to int() * Updated from global requirements * devstack: add support for mistraldashboard * Fixing SSH actions to use names of private keys * [Docs] Add 'Cookbooks' page * Use oslo\_config new type PortOpt for port options * Add decode() function for string comparison * Refactored filter implementation * mistral-documentation: dashboard documentation regarding debug known issue * Fix mistral dsvm gate * Updated from global requirements * Adding 'json\_pp' function in YAQL * Added home-page value with mistral docs * filter() is wrapped around list() * Updated from global requirements * Updated from global requirements * Extracting generator objects returned by openstack actions * Set version for Mitaka * Updated from global requirements * Adding functional tests for SSH actions * Fixing "Task result / Data Flow" section of "Main Features" in docs * Fixing terminology/actions section in documentation * Fixing description of "mistral\_http" action in DSL spec * Adding section about validation into 
API v2 spec * Adding "Cron triggers" section into API v2 specification * Action definition updated, when workbook is created * Adding "Services" section into API v2 specification * Fixing small issues in documentation * Updated from global requirements * Creating new SSH action which uses gateway * Fixing ssh action to use private keys * Use MutableDict from sqlalchemy directly * Updated from global requirements * Delivering error message via header in pecan.abort * Replace copy.copy with copy.deepcopy * Updated from global requirements * Remove the transaction scope from task executions API * Colorise mistral log output * Updated from global requirements * Fix argparse error in wsgi script * Update AUTHORS file * mistral-documentation: dashboard documentation regarding debug * Fix more unit tests in py34 job * Fixing scheduler tests * Remove usage of expandtabs() in get\_workflow\_definition * Renaming state DELAYED to RUNNING\_DELAYED in doc 1.0.0.0rc1 ---------- * Renaming state DELAYED to RUNNING\_DELAYED * Support JSON and arrays in JavaScript action in Mistral * Fix some spelling typos in manual and program output * Fix order of arguments in assertEqual * Fix more tests in python34 gate * Using six.iteritems() to avoid some python3 tests failure * Fixing run action when error occurs * Fixing std.create\_instance workflow * Adding devstack installation doc * Fixing searching errors in mistral.exceptions * Check for trigger before delete wf * Change ignore-errors to ignore\_errors * Removing "skip" decorators for some OpenStack actions tests * Workflow definition updated, when workbook is created * Fail task on publish error * Raise correct exception when a service doesn't exist * Add semantics validation of direct workflow 'join' tasks * .mailmap for pbr AUTHORS update * Fix two typos * Updated from global requirements * Adding validation of workflow graph * Mistral documentation: CLI operations * Adding 'is\_system' to definition model * Fixing uploading public 
workflow or action * Fixing DSL documentation * Initial commit that fixes py34 tests run * Refactor get\_task\_spec using mechanism of polymorphic DSL entities * get\_action\_list: improve generated JSON output * get\_action\_list: use novaclient.client.Client * Adding test where with-items evaluates 'env' * Fixing indentation in 'create action' tutorial * Minor changes to Mistral docs * Customized response sent in case of fault condition * Fix docstring for the test of the std.email action * Fix order of arguments in assertEqual * Switch to devstack plugin * Updated from global requirements * Fix usage of python-novaclient in Mistral 1.0.0.0b3 --------- * Mistral docs: Upgrade database guide * Mistral terminology: cron-triggers and actions * Add YAQL function 'task' to extract info about task in DSL * Raising exception if there aren't start tasks in direct workflow * Mistral docs terminology: executions * The Link for plugin samples is added * Mistral documentation: mistralclient * Support action\_execution deletion * Use default devstack functional for Mistral user/service/endpoint creation * Fix timing in expired execution unit test * Fix execution update where state\_info is unset * Fix creation of Mistral service and endpoints * Removes unused posix-ipc requirement * Mistral documentation: architecture * Mistral documentation: Quickstart * Updated from global requirements * Small adjustments and fixes for execution expiration policy * Mistral docs terminology: workbooks and workflows * Fixing occasional fail of test\_create\_action\_execution * Adding project\_id to expiration-policy for executions ctx * Fixing 2 typos in comments * Mistral documentation: adding configuration guide * Refactor task controller with new json type * Refactor execution controller with new json type * Refactor environment controller with new json type * Refactor cron trigger controller with new json type * Refactor action execution controller and tests * Fixing working concurrency 
when value is YAQL * Add fields filter for workflow query * Mistral documentation: adding installation guide * Fix failure in execution pagination functional tests * Enabling direct workflow cycles: adding a parallel cycles test * Switching to six module where it's not used yet * Mistral documentation: dashboard installation guide * Mistral documentation: main features * Add resource params to reflect WSME 0.8 fixes * Add schema for additional properties of BaseListSpec * Enabling direct workflow cycles: adding another test * Enabling direct workflow cycles: fixing evaluation of final context * Add config example for rotating logs * Add pagination support for executions query API * Purge executions created during functional testing * Moving to YAQL 1.0 * Fixing cron trigger test * Update the gitignore file and tox.ini * Enabling direct workflow cycles: fixing find\_task\_execution() function * Enabling direct workflow cycles: adding a test that now doesn't pass * Add pagination support for actions query API * Add functional tests for workflow query * Fixed lack of context for triggers * Fixing working with-items and retry together * Implementing with-items concurrency * Add pagination support for workflows query API * Update AUTHORS * Raise user-friendly exception in case of connection failed * Scheduler in HA 1.0.0.0b2 --------- * Fix postgresql unit tests running * Add API for rerunning failed task execution * Remove mistral.conf.sample * Expiration policy for expired Executions * Add Service API * Add coordination feature to mistral service * Mistral documentation: Overview article * Mistral documentation: Initial commit * Complete action on async action invocation failure * Add processed field in task query response * Add one more tox env for running unit tests with postgresql * Add feature to rerun failed WF to the engine interface * Enable workflow to be resumable from errors * Fixing error result in run-action command * Fixing std.http action * Add 
coordination util for service management * Support large datasets for execution objects * Fixing execution state\_info * Fixing import error in sync\_db.py * Error result: fix std.http action * Error result: doc explaining error result in base action class * Error result: adding more tests * Making / and /v2 URLs allowed without auth * Error result: allow actions to return instance of wf\_utils.Result * Error result: adding a test for error result * Remove explicit requirements.txt occurrence from tox.ini * Remove H803, H305 * Fixing workflow behavior with non-existent task spec * Make update behavior consistent * Drop use of 'oslo' namespace package * Add guidance for updating openstack action mappings * New mock release(1.1.0) broke unit/function tests * Fixing get task list API * Updating yaql version * Fix cron triggers * Fix mistralclient errors when reinstalling devstack * Use task.spec so the result is always a list for with-items; remove redundant 'if' (Closes-Bug: #1468419) * Modify run\_tests.sh to support PostgreSQL * Add Mistral service and endpoint registration to README.rst * Fix inappropriate condition for retry policy * Fix invalid workflow completion in case of "join" * No input validation for action with kwargs argument * Delete one more tag 1.0.0.0b1 in devstack script pushed by mistake 1.0.0.0b1 --------- * Removing redundant header from setup.py * Simplifying a few data\_flow methods * Workflow variables: modifying engine so that variables work * Workflow variables: adding "vars" property into workflow specification * Fixing devstack gate failure * Bug fix: with-items tasks should always have result of list type * Set default log level of loopingcall module to 'INFO' * Implementing action\_execution POST API * Implementing 'start\_action' on engine side * Fix wrong zuul\_project name in mistral gate script * Creating action\_handler to separate action functionality * Get rid of openstack/common package 
* Improving devstack docs * Drop use of 'oslo' namespace package * Fix execution update description error * Fix the inappropriate database initialization in README.rst * Fix stackforge repo refs in devstack/lib/mistral * Fix wrong db connection string in README.rst file * Add description param to execution creation API * Update .gitreview file for project rename * Add description field to executions\_v2 table * Make use of graduated oslo.log module * Implementing 'continue-on' retry policy property * Adding some more constraints to cron trigger * Adding 'continue-on' to retry policy spec * Adding input validation to cron-trigger creation * Fixing execution-update API * Fixing sending the result of subworkflow * Fix command line arguments having lower priority than config file * Make mistral use oslo-config-generator * Fixing mistral resources path * Fix devstack back to rabbit * Fixing devstack-gate failure * fix: Extra fields in the env definition are allowed * Allow pause-before to override wait-before policy * Adjust docs API to last changes * Fixing YAQL related errors * Skip test on heat action * Removing incorrect 2015.\* tags for client in devstack script * Adding migrations README * Fix dsvm gate failure * Fixing YAQL len() function in Mistral * Adding 'workflow\_params' to cron triggers * Allowing a single string value for "requires" clause * Adding "requires" to "task-defaults" clause * Updating requirements to master * Updating mapping.json to master * Fix bug with action class attributes * Setting base version in setup.cfg for liberty cycle * Fix error when getting workflow with default input value * Fix wrong log content * Retry policy one line syntax * Fixing yaql version * Fix yaql error caused by the ply dependency * Fixing action\_executions API * Adding script for retrieving OpenStack action list * Adding tests on 'break-on' of retry policy * Update mapping.json for OpenStack actions * Allowing strings in on-success/on-error/on-complete clauses * 
Consider input default values in ad-hoc action * Change novaclient import to v2 2015.1.0rc1 ----------- * Add action execution ID to action context * Consider input default values in workflow execution * Removing "policies" keyword from resources * Getting rid of "policies" keyword * Rolling back YAQL to v0.2.4 * Fixing result ordering in 'with-items' * Fixing tags of wf as part of wb * Fixing variable names in db/v2/sqlalchemy/api.py * Fix a logging issue in ssh\_utils * Pin oslo pip requirements * Add YAQL parsing to DSL validation * Fixing engine concurrent issues * Apply input schema to workflow/action input * Add schema for workflow input with default value support * Remove transport from WSGI script * Fixing API 500 errors on Engine side * Fix typo in wf\_v2.yaml * Moving to YAQL 1.0 * Get rid of v1 in installation scripts * Fixing exception type that workbook negative tests expect * Removing v1 related entries from setup.cfg * Renaming "engine1" to "engine" * Fixing DB errors in transport * Removing left v1 related stuff (resources, DSL specs) * Add workbook and workflow validation endpoints * Deleting all v1 related stuff * Fixing docs on target task property in README * Rename 'wf\_db' to 'wf\_def' to keep consistency * Provide 'output' in action\_execution API correctly * Small data\_flow refactoring, added TODOs to think about design * Fixing version info in server title * Fixing 'with-items' with plain input * Add 'keep-result' property to task-spec * Add implicit task access in workflow * Fixing work 'with-items' on empty list * Expanding generators when evaluating yaql expressions * Add mistral-db-manage script * Small refactoring in engine, task handler and workflow utils * Fixing big type column for output and in\_context * Harden v2 DSL schema for validation * Fix bug with redundant task\_id in part of logs * Fixing 'with-items' functionality * Fixing task API (published vars) * Support subclass iteration for Workflow controller 2015.1.0b3 
---------- * Fixing tasks API endpoint * Add action\_execution API * Fixing pause-before policy * Fixing timeout policy * Implementing 'acquire\_lock' method and fixing workflow completion * Fix retry policy * Fixing wait-after policy * Fixing wait-before policy * Trigger remaining-executions and first-exec-date * Refactor task output: full engine redesign * Fix DSL schema in test workbook * Fixing scheduler work * Small refactoring in test\_javascript * Add WSGI script for API server * Fix list of upstream tasks for task with no join * Fixing finishing workflow in case DELAYED task state * Adding validation in policies * Refactor task output: DB API methods for action executions * Refactor task output: 'db\_tasks'->'task\_execs', 'db\_execs'->'wf\_execs' * Refactoring task output: 'task\_db' -> 'task\_ex', 'exec\_db' -> 'wf\_ex' * Refactoring task output: full redesign of DB models * Adding string() YAQL function registered at Mistral level * Fixing published vars for parallel tasks (and join) * Limit WorkflowExecution.state\_info size * Fixing YAQL in policies * Default workflow type to 'direct' * Fix wrong log task changing state * Fix mismatch to new YAQL syntax * Adjust standard actions and workflows * Changing YAQL syntax delimiters * Remove eventlet monkey patch in mistral \_\_init\_\_ * Refactoring task output: renaming DB models for better consistency * Fix OS action client initialization * Expose stop\_workflow in API * Add simple integration tests for OpenStack actions * Fix formatting endpoint urls in OS actions * Fixing a bug in logging logic and small refactoring * Refactoring task output: renaming 'output' to 'result' for task * Refactoring task output: adding ActionInvocation model * Task specification improvement (Part 2) * Add support for auth against keystone on https * Support ssl cert verification on outgoing https * Make spec object more readable in logging * Fix test\_nova\_actions after changes in tempest * Task specification improvement * 
Renaming \_find\_completed\_tasks to \_find\_successful\_tasks * Adding more tests for parallel tasks publishing * Fixing bug with context publishing of parallel tasks * Fix keystone actions * Fix tempest gate, add tempest import to our script * Fix the wrong project name in run\_tests.sh usage * Track execution and task IDs in WF trace log * Changing InlineYAQLEvaluator: treat only {yaql} as YAQL * Fix H904 pep8 error * Refactoring inline parameters syntax * Add Rally jobs related files to Mistral 2015.1.0b2 ---------- * JavaScript action: part 2 * Allowing multiple hosts for ssh action * Catch workflow errors * Rename environment to env in start\_workflow * Fix action\_context in with\_items * Fix sequential tasks publishing the same variable * fix doc dsl v2 * JavaScript action: part 1 * Apply default to action inputs from environment * Add full support for YAQL expressions * Fixing a data flow bug with parallel tasks * Changing publishing mechanism to allow referencing context variables * Fix 500 error on wrong definition * Pass action error to results * Fixing problem with trust creation * Working on secure DB access (part 4) * Working on secure DB access (part 3) * Working on secure DB access (part 2) * Working on secure DB access (part 1) * Concurrency: part 2 * Adding assertions for "updated\_at" field in DB tests * Fix imports due to changes in tempest * Fixing environment tests * Concurrency: part 1 * Change 'with-items' syntax * Add validation on 'with-items' input * Adding test on calculating multi-array input * Adding more tests for YAQL length() function * Implement workflow execution environment - part 3 * Implement workflow execution environment - part 2 * Implement workflow execution environment - part 1 * Small: remove polluting debug log * Updating YAQL dependency to version 0.2.4 * Update README file with devstack installation instruction * Small: refactor commands * Fix mistralclient initialization * Small fixes in default config * small tox 
fixes * Using 'with-items' instead of 'for-each' * Fixing README * Implementing "no-op" task * Updating SQLAlchemy dependency 2015.1.0b1 ---------- * Refactor resume algorithm * Implement pause-before * Fixing parsing inline syntax parameters * Fix retry policy unit test * Fixing a bug in retry policy * Updates logging configuration samples * Changing target task property to singular form * Add region name to OpenStack client initialization * Fixing for-each * API controllers should log requests at INFO level * Add test case for dataflow to test action input * Refactor for-each * Style changes in launch.py * Testing wait policies defined in "task-defaults" for reverse workflow * Testing timeout policy defined in "task-defaults" for reverse workflow * Testing retry policy defined in "task-defaults" for reverse workflow * Redesigning engine to move all remote calls from transactions * Working on "join": making "one" join value work (discriminator) * Working on "join": allowed value "one" for "join" property * Add docs on task-affinity and configuring MySQL * Working on "join": removing array type from "join" JSON schema * Working on "join": making "join" trigger only once * Working on "join": adding a test to verify that join triggers once * Working on "join": fixing "partial join" test new "noop" engine command * Working on "join": implementing partial join with numeric cardinality * Modified install docs * Fix creating triggers with the same pattern, wf and wf-input * Working on "join": added a test for numbered partial join * Refactor policies tests * Working on "join": making "full join" work with conditional transitions * Working on "join": making "full join" work with incoming errors * Adding "std.fail" action that always throws ActionException * Adding "std.noop" action (can be useful for testing) * Raise human-readable exception if workflow\_name is not a dict * Working on "join": first basic implementation of full join * Working on "join": add "join" property 
into task specification * Working on "join": implement basic test for full join * Fix trace with wrong input for action * Fix Application context not found in tests * Add advanced tests on workflow-resume * Make able to resume workflow * Refactor API tests for v2 * Fix creating std actions * Renaming trusts.py to security.py and adding method add\_security\_info * Refactoring workbooks service to be symmetric with other services * Use YAML text instead of JSON in HTTP body * Renaming "commands" to "cmds" in engine to avoid name conflicts * Refactor std.email action * Update README files * Sort executions and tasks by time * Add 'project\_id' to Execution and Task * Fill 'wf\_name' task\_db field * Add cinder actions * Add possibility to pass variables from context to for-each * Implement for-each task property * Updating AUTHORS file * Refactoring getting one object from DB * Fix creating objects with the same names * Add API integration tests for actions 0.1.1 ----- * Construct and pass action\_context to action * Add passing auth info to std.http * Adding print out of server information into launch script * Adding method for authentication based on config keystone properties * Add functional API tests for cron-triggers * Docs fix - small structure fix * Add documentation - part 3 * Add validating of 'for-each' task property DSL * Cut too long task result in log * Cleanup, refactoring and logging * Fixing condition in workflow service * Adding endpoint for cron triggers * Refactoring workflow and action services * Implementing cron trigger v2 * Adding DB model and DB api methods for cron triggers * Provide workflow input via API * Add for-each to task spec * Now collections in the DB are sorted by name * Create standard workflows and actions * Fixing order of commands and tasks in direct workflow * Fix task-defaults correct work * Removing saving raw action/workflow result under 'task.taskName' * Making YAQL function length() work for generators * Fixing a bug in 
inline expressions * Adding length() YAQL function * Whitelist binary 'rm' in tox.ini * Add adding target via YAQL * Add simple task affinity feature * Fixing dsl v2 unit test * Refactoring action service * Use keystonemiddleware in place of keystoneclient * Add generating parameters for openstack-actions * Provide action-input via API * Fix dataflow work * Add documentation - part 2 * Add documentation - part 1 * Update tearDown methods in API integration tests * Use $(COMMAND) instead of \`COMMAND\` * Making execution context immutable * Add workflow trace logging in engine v2 * Fix scheduler test * Fix providing 'is\_system' property in /actions * Fix tasks in order of execution * Stop using intersphinx * Style changes in Scheduler and its tests * Add script to run functional tests locally * Adding 'tags' to action rest resource * Modifying workflow and action services to save 'tags' * Adding 'tags' to workflow and action specs * Cleaning up obsolete TODOs and minor style changes * Update requirements due to global requirements (master) * Fix API tests for v2 version 0.1 --- * Style changes in policies and commands * Fix race conditions in policies * Fix workbook and workflow models * Implementing policies in task-defaults property * Add timeout policy * Implementing 'task-defaults' workflow property * Cosmetic changes in actions service * Making action controller able to handle multiple actions * Making workflow endpoint able to upload multiple workflows * Fixing v2 workbooks controller not to deal with 'name' * Modifying workbook service to infer name and tags from definition * Adding 'name' to reverse\_workflow.yaml workbook * Add workflow service module * Fix providing result (task-update API) * Add param 'name' to the test definition * Adding 'name' and 'tags' into workbook spec * Cosmetic changes in executions v2 controller and tests * Removing obsolete code related to ad-hoc actions * Renaming 'parameters' to 'input' everywhere * Cosmetic changes in Data 
Flow and commands * Fix passing params to execution * Fix dataflow work * Adding workflow parameters validation * Removing engine redundant parameter * Adding tests for order of engine instructions * Fixing db properties for testing purposes * Add API integration tests for v2 * Trivial: improve ad-hoc action test * Fix input on execution create * Fixing task/workflow specs to do transformations with 'on-XXX' once * Fixing v2 specs so 'on-XXX' clauses return lists instead of dicts * Improving exceptions for OpenStack actions * Getting rid of explicit 'start-task' property in workflow DSL * Implementing workflow 'on-task-XXX' clauses * Fix wrong passing parameter 'workflow\_input' * Fixing workflow specification to support 'on-task-XXX' clauses * Fixing workflow handlers to return all possible commands * Refactoring engine using abstraction of command * Delete explicit raising DBError from transaction * Fixing passing raw\_result in v1 * Register v2 API on keystone by default * Renaming 'stop\_workflow' to 'pause\_workflow' * Adding unit tests for engine instructions * Fixing task v2 specification * Fix run workflow in case task state == ERROR * Fixed retry-policy optional 'break-on' property * Fix workflow update API * Add mechanism for generating action parameters * Implement short syntax for passing base-parameters into adhoc-action * Changing all DSL keywords to lower case * Additional testing of reverse workflow * Pass output from task API to convey\_task\_result * Moving all API tests under 'mistral.tests.unit' package * Fixing workbook definition upload for v1 * Add check on config file in sync\_db script * Fixed Execution WSME model and to\_dict() * Saving description from definition in actions endpoint * Fixing workflows controller to fill 'spec' property based on definition * Adding actions endpoint * Cosmetic changes * Fixing engine to support adhoc actions * Fixing workbook service to create actions * Implement wait-after policy and retry * Add test on 
passing expressions to parameters * Fixed Engine v2 work on fake RPC backend * Add possibility to use different types in task parameters * Adding necessary DB API methods for actions * Creating ad-hoc actions engine test * Removing obsolete namespace related methods from task v2 spec * Fixing subworkflow resolution algorithm * Removing 'workflow\_parameters' from workflow spec * Switching to using 'with db\_api.transaction()' * Removing redundant parameters from methods of policies * Add 'description' field to specifications * Add serializers to scheduler call * Implement Wait-before policy * Refactoring engine to build and call task policies * Provide executor info about action * Create action\_factory without access to DB * Delete code related to Namespaces * Change instruction how to start Mistral * Dividing get\_action\_class into two separate methods * Rename action\_factory to action\_manager * Modify action\_factory to store actions in DB * Work toward Python 3.4 support and testing * Renaming 'on-finish' to 'on-complete' in task spec * Adding "wait-before" and "wait-after" to task policies * Fixing workflow spec to return start task spec instead of its name * Including "policies" into task spec * Adjusting policy interfaces * Renaming 'workflow\_parameters' to 'workflow-parameters' * Small optimizations and fixes * Fixing processing subworkflow result * Renaming 'class' to 'base' in action spec * Renaming 'start\_task' to 'start-task' in workflow spec * Fix execution state ERROR if task\_spec has on-finish * Additional changes in Delayed calls * Fixing services/workbooks.py to use create\_or\_update\_workflow() * Implement REST API v2.0 * Adding new methods to DB API v2 (load\_xxx and create\_or\_update\_xxx) * Adding unit tests for workflow DB model * Add service for delayed calls * Improving services/workbooks * Removing obsolete db.api.py module in favor of db.v1.api.py * Introducing 'workflow' as an individual entity * Removing 'Namespaces' section from DSL 
* Renaming 'linear' workflow to 'direct' * Implementing task execution infrastructure * Add two more tests which check workflow execution * Small updates to devstack integration * Adding transaction context manager function for db transactions * Fail workflow if any task fails * Fixing validation for action specifications ('output' property) * Working on linear workflow: on\_task\_result() * Working on linear workflow: start\_workflow() * Working on engine implementation: on\_task\_result() * Renaming base class for Mistral DB models * Working on engine implementation: start\_workflow() * Fix small issues in tests * Cosmetic changes in integration tests * Rename resource directory * Add integration test on Glance Action * Add test on Keystone Action * Add integration tests on nova actions * Add tests which check task dependencies * Move gate tests under mistral/tests * Add neutron actions * Small fixes in openstack-actions * Moving TaskResult and states to 'workflow' package * Adding implementation of method \_\_repr\_\_ for DB models * Working on reverse workflow: on\_task\_result() * Working on reverse workflow: implementing method start\_workflow() * Replacing NotImplemented with NotImplementedError * Working on reverse workflow: fixing specification version injection * Unit tests for v2 DB model * Refactoring DB access layer * Implementing DSL specification v2 (partially) * Add heat actions * Add openstack actions * Switching from dicts to regular objects in DB API * Initial commit for the new engine * Fix mistral gate job * Replace oslo-incubator's db with standalone oslo.db * Move oslotest into test-requirements.txt * Calculate context for tasks with dependencies * Cleaning up index.rst file * The schedule triggers need to set up admin context before running * Add running mistralclient integration tests * Make executor able to work in isolated environment * Add installation of mistralclient in devstack script * Make plugins easier to use * Update requirements 
due to global-requirements * Fixing Mistral HTTP action to take care of empty headers * Log action failures and exceptions 0.0.4 ----- * Fixing wrong access to Mistral security context in engine * Make OpenStack related data available in actions * Add project\_id to the workbook and filter by it * Make sure the context is correctly passed through the rpc * Add Executions and Tasks root API endpoints * Removing obsolete folder "scripts" * Remove redundant convey\_task\_results arguments * Remove redundant DB API arguments * 'requires' should take a string or list * Fix get task list of nonexistent execution * Favor addCleanup() over tearDown() * Make sure the api tests get a valid context * Fix Hacking rule H306 and H307 * Fix hacking rule H236 * Fix Hacking rule H302 (import only modules) * Expose Task's output and parameters through API * Make the service\_type more consistent * Switch from unittest2 to oslotest(testtools) * Fix hacking rules H101 and E265 * Temporarily disable the new hacking rules * Renaming all example config files from \*.conf.example to \*.conf.sample * Fixing obsolete file name in README.rst * Fix devstack gate * Add upload definition action in test * Do a better job of quietening the logs * All tests should call the base class setUp() * Move all tests to use base.BaseTest * Add OS\_LOG\_CAPTURE to testr.conf * Fix create execution when workbook does not exist * Fix getting action\_spec in create tasks * Added information about automated tests * Refactor test\_task\_retry to not rely on start\_task * Clean up configuration settings * Refactor test\_engine to not rely on start\_task * Fix update nonexistent task * Fix get execution list when workbook does not exist * Fix keystone config group for trust creation * fix mistral devstack scripts * Fix bug with getting nonexistent task * Fix duplicate keystone auth\_token config options * Move tests to testr * Add negative functional tests * Add new tests for executions and tasks * Add lockutils 
to openstack/common * Implement new mistral tests * Remove unneccesary oslo modules * Making "Namespaces" section truly optional * Restore script update\_env\_deps in tools * Fix devstack integration scripts * Remove unused function get\_state\_by\_http\_status\_code * Sync code with oslo-incubator * Small engine bugfixing/refactoring * Make field 'Namespaces' optional * Add support for plugin actions * Add autogenerated API documentation * Adding docstring for HTTPAction class * Renaming 'events' to 'triggers' * Adding more http standard action parameters * Fix H404 multi line docstring should start without a leading new line * Fix H233 Python 3.x incompatible use of print operator * Fix pep H301 one import per line * Fix pep H231 Python 3.x incompatible 'except x,y:' construct * Fix pep H402 one line docstring needs punctuation * Fix pep H201 no 'except:' at least use 'except Exception:' * Fix pep E226 missing whitespace around arithmetic operator * Add hacking to the flake8 tests * Add/Fix all error handling mechanism on REST API * Fix url in "get workbook definition" test * Cleanup exceptions and add http code * Throwing an error when workbook validation fails * Throw NotFoundException when object not found * Fix creating trust on workbook creation * Allow launch script to start any combination of servers * Fix 500 status code on DELETE request * Fix issue with tempest tests * Task.publish now is processed as arbitrary structure * Fix demo.yaml example in tempest tests * Add test on arbitrary output dict in action * Fix mistral tests * Update mistral tests * Context contains results of all previous tasks now 0.0.2 ----- * Fixed issue with tarballs 0.0.1 ----- * Refactor engine to use plugins * Fixes list of requirements * Fixes README.rst formatting * Making workflow trace logging more consistent * Added Devstack integration * Fixing setup.cfg * Fix work on MySQL backend * Replace rabbit config to 'default' section * Additional workflow trace logging in 
abstract\_engine.py * Fixing wrong comparison in retry.py * Engine as a standalone process * Improved README file * Fix evaluating task parameters * Adding all conf files in etc/ to .gitignore * Fix broken retry tests * Remove etc/logging.conf * Add workflow logging * Fixing inline expressions evaluation * Making execution data available in data flow context * Fixing initialization of variable 'action\_spec' in abstract\_engine.py * Remove redundant update task operation * Fix convert params and result in AdHocAction * Adding parameters to adhoc action namespaces * Removing 'base\_output' from ad-hoc actions specification * Temporarily commenting assertions in task retry tests * Temporarily commenting assertions in task retry tests * Fix result of HTTP action * Fix returning ERROR task state * Fixing http action and abstract engine * Moving expressions.py out of package 'engine' * Change repeater to retry on error * BP mistral-actions-design (raw action spec -> ActionSpec) * BP mistral-actions-design (removing old actions, addressing previous comments) * BP mistral-actions-design (add SSH action) * BP mistral-actions-design (switch to new design) * BP mistral-actions-design (ad-hoc actions in factory) * BP mistral-actions-design (ad-hoc action) * BP mistral-actions-design (Mistral HTTP action) * BP mistral-actions-design (action creation) * BP mistral-actions-design (add new actions package) * BP mistral-actions-design * Add SSH Action * Remove local engine * Fix repeatable task scheduling * Add resolving inline expressions * Cosmetic change * Fixed issue with deprecated exception * Fix minor issues * Fix keystone trust client * Add extracting action output * Refactor the local engine to use an in process executor * Implements: blueprint mistral-std-repeat-action * Correct fake action test name * Remove unneeded declarations in unit tests * Add keystone auth\_token in context * Fix keystone config group name * Add script to allow update dependencies in all envs * 
Fixing ordering bugs in local engine tests * Fixing ordering bugs in workbook model and tests * Fixing executor launch script * Fix getting task on-\* properties * Rename 'events' to 'triggers' * Implement new object-model specification * Use oslo.messaging for AMQP communications * Working on Data Flow (step 5) * Working on Data Flow (step 4) * Working on Data Flow (step 3) * Make engine configurable, make debugger show local variables * Partially fixed the pylint errors * Fix throwing exception when 'output' block is not defined * Fixed critical pylint warnings * Working on Data Flow (step 2) * Working on Data Flow (step 1) * Add scheduling specific task on sucess/error * Send email action, part 2 * Rename "target\_task" to "task" * Send email action, step 1 * Add negative tests to api * Fixing access to task "parameters" property in DSL * Fix getting task on-\* properties in DSL * Fix task keys properties in DSL parser * Add YAQL expression evaluation * Modified Rest action for process 'input' property * Add sync task execution * Fixing and refactoring authentication * Deleting client and demo app from main Mistral repo * Fixed issue with tarballs * Add integration tests * Divide RestAction on two separated actions in DSL * Add new access methods in DSL parser * Refactoring local and scalable engines * Adding Data Flow related code to REST API * Fixing typo in an exception message in Mistral client * Step 2 refactoring Mistral engines * Step 1 refactoring Mistral engines * Fix exceptions output when creating object in DB * Add SQLAlchemy in requirements.txt * Fix DB API import in scheduler.py * Implement single (non-scalable) engine * Fixing scheduler transactions * Fixing workbook events creation * Fixing flak8 excludes in tox.ini * Adjusting all license headers in python files so they look the same * Adding license and authors file * context creation in periodic task to execute workbook * Fix workbook POST duplicate exception * Add demo app for Mistral * Fix 
client for further patching * Added trust for workbook runs * Updating README.md file * Fixing scalable engine algorithm * Fixing scripts' headers to make them executable * Various fixes related to end-to-end testing * Fix resolving dependencies in workflow * Add explicit DB transaction management * Added context for application * Added keystone token authorization * Fix periodic tasks running over engine * Add engine related features * Implementing scalable Mistral Engine * Add DSL parser * Implementing Mistral Rest API Client * Add SQLAlchemy models and access methods * Connect DB implementation with DB interface * Added periodic events * Working on REST API * Added initial database setup * Adding REST API application skeleton based on pecan/wsme * Adding pecan, wsme, oslo and adjusting packages * Modify use case example * Fixing licence in setyp.py * Add example of using taskflow * Add .gitreview, setup.py and other infrastructure * Adding .gitignore * Adding virtual environment tools * Adjusting project name in readme file * Initial commit mistral-6.0.0/test-requirements.txt0000666000175100017510000000221613245513262017463 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. 
hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0
coverage!=4.4,>=4.0 # Apache-2.0
croniter>=0.3.4 # MIT License
doc8>=0.6.0 # Apache-2.0
eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 # MIT
fixtures>=3.0.0 # Apache-2.0/BSD
keystonemiddleware>=4.17.0 # Apache-2.0
mistral-lib>=0.3.0 # Apache-2.0
mock>=2.0.0 # BSD
networkx<2.0,>=1.10 # BSD
nose>=1.3.7 # LGPL
oslotest>=3.2.0 # Apache-2.0
oslo.db>=4.27.0 # Apache-2.0
oslo.messaging>=5.29.0 # Apache-2.0
oslo.policy>=1.30.0 # Apache-2.0
osprofiler>=1.4.0 # Apache-2.0
os-api-ref>=1.4.0 # Apache-2.0
pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0 # BSD
reno>=2.5.0 # Apache-2.0
requests-mock>=1.1.0 # Apache-2.0
sphinx!=1.6.6,>=1.6.2 # BSD
sphinxcontrib-httpdomain>=1.3.0 # BSD
sphinxcontrib-pecanwsme>=0.8.0 # Apache-2.0
openstackdocstheme>=1.18.1 # Apache-2.0
tooz>=1.58.0 # Apache-2.0
tempest>=17.1.0 # Apache-2.0
testrepository>=0.0.18 # Apache-2.0/BSD
testtools>=2.2.0 # MIT
unittest2>=1.1.0 # BSD
WSME>=0.8.0 # MIT

mistral-6.0.0/requirements.txt

# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
alembic>=0.8.10 # MIT
aodhclient>=0.9.0 # Apache-2.0
Babel!=2.4.0,>=2.3.4 # BSD
croniter>=0.3.4 # MIT License
cachetools>=2.0.0 # MIT License
eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 # MIT
gnocchiclient>=3.3.1 # Apache-2.0
Jinja2!=2.9.0,!=2.9.1,!=2.9.2,!=2.9.3,!=2.9.4,>=2.8 # BSD License (3 clause)
jsonschema<3.0.0,>=2.6.0 # MIT
keystonemiddleware>=4.17.0 # Apache-2.0
mistral-lib>=0.3.0 # Apache-2.0
networkx<2.0,>=1.10 # BSD
oslo.concurrency>=3.25.0 # Apache-2.0
oslo.config>=5.1.0 # Apache-2.0
oslo.context>=2.19.2 # Apache-2.0
oslo.db>=4.27.0 # Apache-2.0
oslo.i18n>=3.15.3 # Apache-2.0
oslo.messaging>=5.29.0 # Apache-2.0
oslo.middleware>=3.31.0 # Apache-2.0
oslo.policy>=1.30.0 # Apache-2.0
oslo.utils>=3.33.0 # Apache-2.0
oslo.log>=3.36.0 # Apache-2.0
oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0
oslo.service!=1.28.1,>=1.24.0 # Apache-2.0
osprofiler>=1.4.0 # Apache-2.0
paramiko>=2.0.0 # LGPLv2.1+
pbr!=2.1.0,>=2.0.0 # Apache-2.0
pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0 # BSD
python-barbicanclient!=4.5.0,!=4.5.1,>=4.0.0 # Apache-2.0
python-cinderclient>=3.3.0 # Apache-2.0
python-designateclient>=2.7.0 # Apache-2.0
python-glanceclient>=2.8.0 # Apache-2.0
python-glareclient>=0.3.0 # Apache-2.0
python-heatclient>=1.10.0 # Apache-2.0
python-keystoneclient>=3.8.0 # Apache-2.0
python-mistralclient>=3.1.0 # Apache-2.0
python-magnumclient>=2.1.0 # Apache-2.0
python-muranoclient>=0.8.2 # Apache-2.0
python-neutronclient>=6.3.0 # Apache-2.0
python-novaclient>=9.1.0 # Apache-2.0
python-senlinclient>=1.1.0 # Apache-2.0
python-swiftclient>=3.2.0 # Apache-2.0
python-tackerclient>=0.8.0 # Apache-2.0
python-troveclient>=2.2.0 # Apache-2.0
python-ironicclient>=2.2.0 # Apache-2.0
python-ironic-inspector-client>=1.5.0 # Apache-2.0
python-zaqarclient>=1.0.0 # Apache-2.0
PyJWT>=1.0.1 # MIT
PyYAML>=3.10 # MIT
requests>=2.14.2 # Apache-2.0
tenacity>=3.2.1 # Apache-2.0
setuptools!=24.0.0,!=34.0.0,!=34.0.1,!=34.0.2,!=34.0.3,!=34.1.0,!=34.1.1,!=34.2.0,!=34.3.0,!=34.3.1,!=34.3.2,!=36.2.0,>=16.0 # PSF/ZPL
six>=1.10.0 # MIT
SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT
stevedore>=1.20.0 # Apache-2.0
WSME>=0.8.0 # MIT
yaql>=1.1.3 # Apache 2.0 License
tooz>=1.58.0 # Apache-2.0
zake>=0.1.6 # Apache-2.0

mistral-6.0.0/setup.py

# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT

import setuptools

# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
    import multiprocessing  # noqa
except ImportError:
    pass

setuptools.setup(
    setup_requires=['pbr>=2.0.0'],
    pbr=True)

mistral-6.0.0/docker_image_build.sh

#!/bin/bash -xe

# TODO (akovi): This script is needed practically only for the CI builds.
# Should be moved to some other place

# install docker
curl -fsSL https://get.docker.com/ | sh
sudo service docker restart
sudo -E docker pull ubuntu:14.04

# build image
sudo -E tools/docker/build.sh
sudo -E docker save mistral-all | gzip > mistral-docker.tar.gz

mistral-6.0.0/mistral/actions/openstack/mapping.json

{ "_comment": "Mapping OpenStack action namespaces to all its actions. Each action name is mapped to python-client method name in this namespace.", "nova": { "_comment": "It uses novaclient.v2.", "agents_convert_into_with_meta": "agents.convert_into_with_meta", "agents_create": "agents.create", "agents_delete": "agents.delete", "agents_find": "agents.find", "agents_findall": "agents.findall", "agents_list": "agents.list", "agents_update": "agents.update", "aggregates_add_host": "aggregates.add_host", "aggregates_convert_into_with_meta": "aggregates.convert_into_with_meta", "aggregates_create": "aggregates.create", "aggregates_delete": "aggregates.delete", "aggregates_find": "aggregates.find", "aggregates_findall": "aggregates.findall", "aggregates_get": "aggregates.get", "aggregates_get_details": "aggregates.get_details", "aggregates_list": "aggregates.list", "aggregates_remove_host": "aggregates.remove_host", "aggregates_set_metadata": "aggregates.set_metadata", "aggregates_update": "aggregates.update", "availability_zones_convert_into_with_meta": "availability_zones.convert_into_with_meta", "availability_zones_find": "availability_zones.find", "availability_zones_findall": "availability_zones.findall", "availability_zones_list":
"availability_zones.list", "certs_convert_into_with_meta": "certs.convert_into_with_meta", "certs_create": "certs.create", "certs_get": "certs.get", "cloudpipe_convert_into_with_meta": "cloudpipe.convert_into_with_meta", "cloudpipe_create": "cloudpipe.create", "cloudpipe_find": "cloudpipe.find", "cloudpipe_findall": "cloudpipe.findall", "cloudpipe_list": "cloudpipe.list", "cloudpipe_update": "cloudpipe.update", "flavor_access_add_tenant_access": "flavor_access.add_tenant_access", "flavor_access_convert_into_with_meta": "flavor_access.convert_into_with_meta", "flavor_access_find": "flavor_access.find", "flavor_access_findall": "flavor_access.findall", "flavor_access_list": "flavor_access.list", "flavor_access_remove_tenant_access": "flavor_access.remove_tenant_access", "flavors_convert_into_with_meta": "flavors.convert_into_with_meta", "flavors_create": "flavors.create", "flavors_delete": "flavors.delete", "flavors_find": "flavors.find", "flavors_findall": "flavors.findall", "flavors_get": "flavors.get", "flavors_list": "flavors.list", "hosts_convert_into_with_meta": "hosts.convert_into_with_meta", "hosts_find": "hosts.find", "hosts_findall": "hosts.findall", "hosts_get": "hosts.get", "hosts_host_action": "hosts.host_action", "hosts_list": "hosts.list", "hosts_list_all": "hosts.list_all", "hosts_update": "hosts.update", "hypervisor_stats_convert_into_with_meta": "hypervisor_stats.convert_into_with_meta", "hypervisor_stats_statistics": "hypervisor_stats.statistics", "hypervisors_convert_into_with_meta": "hypervisors.convert_into_with_meta", "hypervisors_find": "hypervisors.find", "hypervisors_findall": "hypervisors.findall", "hypervisors_get": "hypervisors.get", "hypervisors_list": "hypervisors.list", "hypervisors_search": "hypervisors.search", "hypervisors_statistics": "hypervisors.statistics", "hypervisors_uptime": "hypervisors.uptime", "glance_find_image": "glance.find_image", "glance_list": "glance.list", "keypairs_convert_into_with_meta": 
"keypairs.convert_into_with_meta", "keypairs_create": "keypairs.create", "keypairs_delete": "keypairs.delete", "keypairs_find": "keypairs.find", "keypairs_findall": "keypairs.findall", "keypairs_get": "keypairs.get", "keypairs_list": "keypairs.list", "limits_convert_into_with_meta": "limits.convert_into_with_meta", "limits_get": "limits.get", "neutron_find_network": "neutron.find_network", "quota_classes_convert_into_with_meta": "quota_classes.convert_into_with_meta", "quota_classes_get": "quota_classes.get", "quota_classes_update": "quota_classes.update", "quotas_convert_into_with_meta": "quotas.convert_into_with_meta", "quotas_defaults": "quotas.defaults", "quotas_delete": "quotas.delete", "quotas_get": "quotas.get", "quotas_update": "quotas.update", "server_groups_convert_into_with_meta": "server_groups.convert_into_with_meta", "server_groups_create": "server_groups.create", "server_groups_delete": "server_groups.delete", "server_groups_find": "server_groups.find", "server_groups_findall": "server_groups.findall", "server_groups_get": "server_groups.get", "server_groups_list": "server_groups.list", "server_migrations_convert_into_with_meta": "server_migrations.convert_into_with_meta", "server_migrations_find": "server_migrations.find", "server_migrations_findall": "server_migrations.findall", "server_migrations_get": "server_migrations.get", "server_migrations_list": "server_migrations.list", "server_migrations_live_migrate_force_complete": "server_migrations.live_migrate_force_complete", "server_migrations_live_migration_abort": "server_migrations.live_migration_abort", "servers_add_fixed_ip": "servers.add_fixed_ip", "servers_add_floating_ip": "servers.add_floating_ip", "servers_add_security_group": "servers.add_security_group", "servers_backup": "servers.backup", "servers_change_password": "servers.change_password", "servers_clear_password": "servers.clear_password", "servers_confirm_resize": "servers.confirm_resize", "servers_convert_into_with_meta": 
"servers.convert_into_with_meta", "servers_create": "servers.create", "servers_create_image": "servers.create_image", "servers_delete": "servers.delete", "servers_delete_meta": "servers.delete_meta", "servers_diagnostics": "servers.diagnostics", "servers_evacuate": "servers.evacuate", "servers_find": "servers.find", "servers_findall": "servers.findall", "servers_force_delete": "servers.force_delete", "servers_get": "servers.get", "servers_get_console_output": "servers.get_console_output", "servers_get_mks_console": "servers.get_mks_console", "servers_get_password": "servers.get_password", "servers_get_rdp_console": "servers.get_rdp_console", "servers_get_serial_console": "servers.get_serial_console", "servers_get_spice_console": "servers.get_spice_console", "servers_get_vnc_console": "servers.get_vnc_console", "servers_interface_attach": "servers.interface_attach", "servers_interface_detach": "servers.interface_detach", "servers_interface_list": "servers.interface_list", "servers_ips": "servers.ips", "servers_list": "servers.list", "servers_list_security_group": "servers.list_security_group", "servers_live_migrate": "servers.live_migrate", "servers_lock": "servers.lock", "servers_migrate": "servers.migrate", "servers_pause": "servers.pause", "servers_reboot": "servers.reboot", "servers_rebuild": "servers.rebuild", "servers_remove_fixed_ip": "servers.remove_fixed_ip", "servers_remove_floating_ip": "servers.remove_floating_ip", "servers_remove_security_group": "servers.remove_security_group", "servers_rescue": "servers.rescue", "servers_reset_network": "servers.reset_network", "servers_reset_state": "servers.reset_state", "servers_resize": "servers.resize", "servers_restore": "servers.restore", "servers_resume": "servers.resume", "servers_revert_resize": "servers.revert_resize", "servers_set_meta": "servers.set_meta", "servers_set_meta_item": "servers.set_meta_item", "servers_shelve": "servers.shelve", "servers_shelve_offload": "servers.shelve_offload", 
"servers_start": "servers.start", "servers_stop": "servers.stop", "servers_suspend": "servers.suspend", "servers_trigger_crash_dump": "servers.trigger_crash_dump", "servers_unlock": "servers.unlock", "servers_unpause": "servers.unpause", "servers_unrescue": "servers.unrescue", "servers_unshelve": "servers.unshelve", "servers_update": "servers.update", "services_convert_into_with_meta": "services.convert_into_with_meta", "services_delete": "services.delete", "services_disable": "services.disable", "services_disable_log_reason": "services.disable_log_reason", "services_enable": "services.enable", "services_find": "services.find", "services_findall": "services.findall", "services_force_down": "services.force_down", "services_list": "services.list", "usage_convert_into_with_meta": "usage.convert_into_with_meta", "usage_find": "usage.find", "usage_findall": "usage.findall", "usage_get": "usage.get", "usage_list": "usage.list", "versions_convert_into_with_meta": "versions.convert_into_with_meta", "versions_find": "versions.find", "versions_findall": "versions.findall", "versions_get_current": "versions.get_current", "versions_list": "versions.list", "virtual_interfaces_convert_into_with_meta": "virtual_interfaces.convert_into_with_meta", "virtual_interfaces_find": "virtual_interfaces.find", "virtual_interfaces_findall": "virtual_interfaces.findall", "virtual_interfaces_list": "virtual_interfaces.list", "volumes_convert_into_with_meta": "volumes.convert_into_with_meta", "volumes_create_server_volume": "volumes.create_server_volume", "volumes_delete_server_volume": "volumes.delete_server_volume", "volumes_get_server_volume": "volumes.get_server_volume", "volumes_get_server_volumes": "volumes.get_server_volumes", "volumes_update_server_volume": "volumes.update_server_volume" }, "glance": { "_comment": "It uses glanceclient.v2.", "image_members_create": "image_members.create", "image_members_delete": "image_members.delete", "image_members_list": "image_members.list", 
"image_members_update": "image_members.update", "image_tags_delete": "image_tags.delete", "image_tags_update": "image_tags.update", "images_add_location": "images.add_location", "images_create": "images.create", "images_data": "images.data", "images_deactivate": "images.deactivate", "images_delete": "images.delete", "images_delete_locations": "images.delete_locations", "images_get": "images.get", "images_list": "images.list", "images_reactivate": "images.reactivate", "images_update": "images.update", "images_update_location": "images.update_location", "images_upload": "images.upload", "schemas_get": "schemas.get", "tasks_create": "tasks.create", "tasks_get": "tasks.get", "tasks_list": "tasks.list", "metadefs_resource_type_associate": "metadefs_resource_type.associate", "metadefs_resource_type_deassociate": "metadefs_resource_type.deassociate", "metadefs_resource_type_get": "metadefs_resource_type.get", "metadefs_resource_type_list": "metadefs_resource_type.list", "metadefs_property_create": "metadefs_property.create", "metadefs_property_delete": "metadefs_property.delete", "metadefs_property_delete_all": "metadefs_property.delete_all", "metadefs_property_get": "metadefs_property.get", "metadefs_property_list": "metadefs_property.list", "metadefs_property_update": "metadefs_property.update", "metadefs_object_create": "metadefs_object.create", "metadefs_object_delete": "metadefs_object.delete", "metadefs_object_delete_all": "metadefs_object.delete_all", "metadefs_object_get": "metadefs_object.get", "metadefs_object_list": "metadefs_object.list", "metadefs_object_update": "metadefs_object.update", "metadefs_tag_create": "metadefs_tag.create", "metadefs_tag_create_multiple": "metadefs_tag.create_multiple", "metadefs_tag_delete": "metadefs_tag.delete", "metadefs_tag_delete_all": "metadefs_tag.delete_all", "metadefs_tag_get": "metadefs_tag.get", "metadefs_tag_list": "metadefs_tag.list", "metadefs_tag_update": "metadefs_tag.update", "metadefs_namespace_create": 
"metadefs_namespace.create", "metadefs_namespace_delete": "metadefs_namespace.delete", "metadefs_namespace_get": "metadefs_namespace.get", "metadefs_namespace_list": "metadefs_namespace.list", "metadefs_namespace_update": "metadefs_namespace.update", "versions_list": "versions.list" }, "keystone": { "_comment": "It uses keystoneclient.v3.", "credentials_create": "credentials.create", "credentials_delete": "credentials.delete", "credentials_find": "credentials.find", "credentials_get": "credentials.get", "credentials_list": "credentials.list", "credentials_update": "credentials.update", "domains_create": "domains.create", "domains_delete": "domains.delete", "domains_find": "domains.find", "domains_get": "domains.get", "domains_list": "domains.list", "domains_update": "domains.update", "endpoint_filter_add_endpoint_to_project": "endpoint_filter.add_endpoint_to_project", "endpoint_filter_check_endpoint_in_project": "endpoint_filter.check_endpoint_in_project", "endpoint_filter_delete_endpoint_from_project": "endpoint_filter.delete_endpoint_from_project", "endpoint_filter_list_endpoints_for_project": "endpoint_filter.list_endpoints_for_project", "endpoint_filter_list_projects_for_endpoint": "endpoint_filter.list_projects_for_endpoint", "endpoint_policy_check_policy_association_for_endpoint": "endpoint_policy.check_policy_association_for_endpoint", "endpoint_policy_check_policy_association_for_region_and_service": "endpoint_policy.check_policy_association_for_region_and_service", "endpoint_policy_check_policy_association_for_service": "endpoint_policy.check_policy_association_for_service", "endpoint_policy_create_policy_association_for_endpoint": "endpoint_policy.create_policy_association_for_endpoint", "endpoint_policy_create_policy_association_for_region_and_service": "endpoint_policy.create_policy_association_for_region_and_service", "endpoint_policy_create_policy_association_for_service": "endpoint_policy.create_policy_association_for_service", 
"endpoint_policy_delete_policy_association_for_endpoint": "endpoint_policy.delete_policy_association_for_endpoint", "endpoint_policy_delete_policy_association_for_region_and_service": "endpoint_policy.delete_policy_association_for_region_and_service", "endpoint_policy_delete_policy_association_for_service": "endpoint_policy.delete_policy_association_for_service", "endpoint_policy_get_policy_for_endpoint": "endpoint_policy.get_policy_for_endpoint", "endpoint_policy_list_endpoints_for_policy": "endpoint_policy.list_endpoints_for_policy", "endpoints_create": "endpoints.create", "endpoints_delete": "endpoints.delete", "endpoints_find": "endpoints.find", "endpoints_get": "endpoints.get", "endpoints_list": "endpoints.list", "endpoints_update": "endpoints.update", "groups_create": "groups.create", "groups_delete": "groups.delete", "groups_find": "groups.find", "groups_get": "groups.get", "groups_list": "groups.list", "groups_update": "groups.update", "oauth1.consumers_build_url": "oauth1.consumers.build_url", "oauth1.consumers_create": "oauth1.consumers.create", "oauth1.consumers_delete": "oauth1.consumers.delete", "oauth1.consumers_find": "oauth1.consumers.find", "oauth1.consumers_get": "oauth1.consumers.get", "oauth1.consumers_list": "oauth1.consumers.list", "oauth1.consumers_put": "oauth1.consumers.put", "oauth1.consumers_update": "oauth1.consumers.update", "oauth1.request_tokens_authorize": "oauth1.request_tokens.authorize", "oauth1.request_tokens_build_url": "oauth1.request_tokens.build_url", "oauth1.request_tokens_create": "oauth1.request_tokens.create", "oauth1.request_tokens_delete": "oauth1.request_tokens.delete", "oauth1.request_tokens_find": "oauth1.request_tokens.find", "oauth1.request_tokens_get": "oauth1.request_tokens.get", "oauth1.request_tokens_list": "oauth1.request_tokens.list", "oauth1.request_tokens_put": "oauth1.request_tokens.put", "oauth1.request_tokens_update": "oauth1.request_tokens.update", "oauth1.access_tokens_build_url": 
"oauth1.access_tokens.build_url", "oauth1.access_tokens_create": "oauth1.access_tokens.create", "oauth1.access_tokens_delete": "oauth1.access_tokens.delete", "oauth1.access_tokens_find": "oauth1.access_tokens.find", "oauth1.access_tokens_get": "oauth1.access_tokens.get", "oauth1.access_tokens_list": "oauth1.access_tokens.list", "oauth1.access_tokens_put": "oauth1.access_tokens.put", "oauth1.access_tokens_update": "oauth1.access_tokens.update", "policies_create": "policies.create", "policies_delete": "policies.delete", "policies_find": "policies.find", "policies_get": "policies.get", "policies_list": "policies.list", "policies_update": "policies.update", "projects_create": "projects.create", "projects_delete": "projects.delete", "projects_find": "projects.find", "projects_get": "projects.get", "projects_list": "projects.list", "projects_update": "projects.update", "regions_create": "regions.create", "regions_delete": "regions.delete", "regions_find": "regions.find", "regions_get": "regions.get", "regions_list": "regions.list", "regions_update": "regions.update", "role_assignments_create": "role_assignments.create", "role_assignments_delete": "role_assignments.delete", "role_assignments_find": "role_assignments.find", "role_assignments_get": "role_assignments.get", "role_assignments_list": "role_assignments.list", "role_assignments_update": "role_assignments.update", "roles_check": "roles.check", "roles_create": "roles.create", "roles_delete": "roles.delete", "roles_find": "roles.find", "roles_get": "roles.get", "roles_grant": "roles.grant", "roles_list": "roles.list", "roles_revoke": "roles.revoke", "roles_update": "roles.update", "services_create": "services.create", "services_delete": "services.delete", "services_find": "services.find", "services_get": "services.get", "services_list": "services.list", "services_update": "services.update", "trusts_create": "trusts.create", "trusts_delete": "trusts.delete", "trusts_find": "trusts.find", "trusts_get": "trusts.get", 
"trusts_list": "trusts.list", "trusts_update": "trusts.update", "users_add_to_group": "users.add_to_group", "users_check_in_group": "users.check_in_group", "users_create": "users.create", "users_delete": "users.delete", "users_find": "users.find", "users_get": "users.get", "users_list": "users.list", "users_remove_from_group": "users.remove_from_group", "users_update": "users.update", "users_update_password": "users.update_password" }, "heat": { "_comment": "It uses heatclient.v1.", "actions_cancel_update": "actions.cancel_update", "actions_check": "actions.check", "actions_resume": "actions.resume", "actions_suspend": "actions.suspend", "build_info_build_info": "build_info.build_info", "events_get": "events.get", "events_list": "events.list", "resource_types_generate_template": "resource_types.generate_template", "resource_types_get": "resource_types.get", "resource_types_list": "resource_types.list", "resources_generate_template": "resources.generate_template", "resources_get": "resources.get", "resources_list": "resources.list", "resources_mark_unhealthy": "resources.mark_unhealthy", "resources_metadata": "resources.metadata", "resources_signal": "resources.signal", "services_list": "services.list", "software_configs_create": "software_configs.create", "software_configs_delete": "software_configs.delete", "software_configs_get": "software_configs.get", "software_configs_list": "software_configs.list", "software_deployments_create": "software_deployments.create", "software_deployments_delete": "software_deployments.delete", "software_deployments_get": "software_deployments.get", "software_deployments_list": "software_deployments.list", "software_deployments_metadata": "software_deployments.metadata", "software_deployments_update": "software_deployments.update", "stacks_abandon": "stacks.abandon", "stacks_create": "stacks.create", "stacks_delete": "stacks.delete", "stacks_environment": "stacks.environment", "stacks_get": "stacks.get", "stacks_list": "stacks.list", 
"stacks_output_list": "stacks.output_list", "stacks_output_show": "stacks.output_show", "stacks_preview": "stacks.preview", "stacks_preview_update": "stacks.preview_update", "stacks_restore": "stacks.restore", "stacks_snapshot": "stacks.snapshot", "stacks_snapshot_delete": "stacks.snapshot_delete", "stacks_snapshot_list": "stacks.snapshot_list", "stacks_snapshot_show": "stacks.snapshot_show", "stacks_template": "stacks.template", "stacks_update": "stacks.update", "stacks_validate": "stacks.validate", "template_versions_get": "template_versions.get", "template_versions_list": "template_versions.list" }, "aodh": { "_comment": "It uses aodhclient.v2.", "capabilities_list": "capabilities.list", "alarm_create": "alarm.create", "alarm_delete": "alarm.delete", "alarm_get": "alarm.get", "alarm_get_state": "alarm.get_state", "alarm_list": "alarm.list", "alarm_set_state": "alarm.set_state", "alarm_update": "alarm.update", "alarm_query": "alarm.query", "alarm_history_get": "alarm_history.get", "alarm_history_search": "alarm_history.search" }, "gnocchi":{ "_comment": "It uses gnocchiclient.v1.", "archive_policy_create": "archive_policy.create", "archive_policy_delete": "archive_policy.delete", "archive_policy_get": "archive_policy.get", "archive_policy_list": "archive_policy.list", "archive_policy_update": "archive_policy.update", "archive_policy_rule_create": "archive_policy_rule.create", "archive_policy_rule_delete": "archive_policy_rule.delete", "archive_policy_rule_get": "archive_policy_rule.get", "archive_polociy_rule_list": "archive_policy_rule.list", "capabilities_list": "capabilities.list", "measures_add": "metric.add_measures", "metric_batch_metrics_measures": "metric.batch_metrics_measures", "metric_batch_resources_metrics_measures": "metric.batch_resources_metrics_measures", "metric_create": "metric.create", "metric_delete": "metric.delete", "metric_get": "metric.get", "measures_get": "metric.get_measures", "metric_list": "metric.list", 
"resource_batch_delete_resource": "resource.batch_delete", "resource_create": "resource.create", "resource_delete": "resource.delete", "resource_get": "resource.get", "resource_history": "resource.history", "resource_list": "resource.list", "resource_search": "resource.search", "resource_update": "resource.update", "resource_type_create": "resource_type.create", "resource_type_delete": "resource_type.delete", "resource_type_get": "resource_type.get", "resource_type_list": "resource_type.list", "resource_type_update": "resource_type.update", "measures_aggregation": "metric.aggregation", "status": "status.get" }, "neutron": { "_comment": "It uses neutronclient.v2_0.", "add_gateway_router": "add_gateway_router", "add_interface_router": "add_interface_router", "add_network_to_dhcp_agent": "add_network_to_dhcp_agent", "add_router_to_l3_agent": "add_router_to_l3_agent", "associate_health_monitor": "associate_health_monitor", "connect_network_gateway": "connect_network_gateway", "create_ext": "create_ext", "create_firewall": "create_firewall", "create_firewall_policy": "create_firewall_policy", "create_firewall_rule": "create_firewall_rule", "create_floatingip": "create_floatingip", "create_gateway_device": "create_gateway_device", "create_health_monitor": "create_health_monitor", "create_ikepolicy": "create_ikepolicy", "create_ipsec_site_connection": "create_ipsec_site_connection", "create_ipsecpolicy": "create_ipsecpolicy", "create_lbaas_healthmonitor": "create_lbaas_healthmonitor", "create_lbaas_member": "create_lbaas_member", "create_lbaas_pool": "create_lbaas_pool", "create_listener": "create_listener", "create_loadbalancer": "create_loadbalancer", "create_member": "create_member", "create_metering_label": "create_metering_label", "create_metering_label_rule": "create_metering_label_rule", "create_network": "create_network", "create_network_gateway": "create_network_gateway", "create_pool": "create_pool", "create_port": "create_port", "create_qos_queue": 
"create_qos_queue", "create_router": "create_router", "create_security_group": "create_security_group", "create_security_group_rule": "create_security_group_rule", "create_subnet": "create_subnet", "create_subnetpool": "create_subnetpool", "create_vip": "create_vip", "create_vpnservice": "create_vpnservice", "delete_agent": "delete_agent", "delete_ext": "delete_ext", "delete_firewall": "delete_firewall", "delete_firewall_policy": "delete_firewall_policy", "delete_firewall_rule": "delete_firewall_rule", "delete_floatingip": "delete_floatingip", "delete_gateway_device": "delete_gateway_device", "delete_health_monitor": "delete_health_monitor", "delete_ikepolicy": "delete_ikepolicy", "delete_ipsec_site_connection": "delete_ipsec_site_connection", "delete_ipsecpolicy": "delete_ipsecpolicy", "delete_lbaas_healthmonitor": "delete_lbaas_healthmonitor", "delete_lbaas_member": "delete_lbaas_member", "delete_lbaas_pool": "delete_lbaas_pool", "delete_listener": "delete_listener", "delete_loadbalancer": "delete_loadbalancer", "delete_member": "delete_member", "delete_metering_label": "delete_metering_label", "delete_metering_label_rule": "delete_metering_label_rule", "delete_network": "delete_network", "delete_network_gateway": "delete_network_gateway", "delete_pool": "delete_pool", "delete_port": "delete_port", "delete_qos_queue": "delete_qos_queue", "delete_quota": "delete_quota", "delete_router": "delete_router", "delete_security_group": "delete_security_group", "delete_security_group_rule": "delete_security_group_rule", "delete_subnet": "delete_subnet", "delete_subnetpool": "delete_subnetpool", "delete_vip": "delete_vip", "delete_vpnservice": "delete_vpnservice", "disassociate_health_monitor": "disassociate_health_monitor", "disconnect_network_gateway": "disconnect_network_gateway", "extend_create": "extend_create", "extend_delete": "extend_delete", "extend_list": "extend_list", "extend_show": "extend_show", "extend_update": "extend_update", "firewall_policy_insert_rule": 
"firewall_policy_insert_rule", "firewall_policy_remove_rule": "firewall_policy_remove_rule", "get_lbaas_agent_hosting_loadbalancer": "get_lbaas_agent_hosting_loadbalancer", "get_lbaas_agent_hosting_pool": "get_lbaas_agent_hosting_pool", "get_quotas_tenant": "get_quotas_tenant", "list_agents": "list_agents", "list_dhcp_agent_hosting_networks": "list_dhcp_agent_hosting_networks", "list_ext": "list_ext", "list_extensions": "list_extensions", "list_firewall_policies": "list_firewall_policies", "list_firewall_rules": "list_firewall_rules", "list_firewalls": "list_firewalls", "list_floatingips": "list_floatingips", "list_gateway_devices": "list_gateway_devices", "list_health_monitors": "list_health_monitors", "list_ikepolicies": "list_ikepolicies", "list_ipsec_site_connections": "list_ipsec_site_connections", "list_ipsecpolicies": "list_ipsecpolicies", "list_l3_agent_hosting_routers": "list_l3_agent_hosting_routers", "list_lbaas_healthmonitors": "list_lbaas_healthmonitors", "list_lbaas_loadbalancers": "list_lbaas_loadbalancers", "list_lbaas_members": "list_lbaas_members", "list_lbaas_pools": "list_lbaas_pools", "list_listeners": "list_listeners", "list_loadbalancers": "list_loadbalancers", "list_loadbalancers_on_lbaas_agent": "list_loadbalancers_on_lbaas_agent", "list_members": "list_members", "list_metering_label_rules": "list_metering_label_rules", "list_metering_labels": "list_metering_labels", "list_network_gateways": "list_network_gateways", "list_networks": "list_networks", "list_networks_on_dhcp_agent": "list_networks_on_dhcp_agent", "list_pools": "list_pools", "list_pools_on_lbaas_agent": "list_pools_on_lbaas_agent", "list_ports": "list_ports", "list_qos_queues": "list_qos_queues", "list_quotas": "list_quotas", "list_routers": "list_routers", "list_routers_on_l3_agent": "list_routers_on_l3_agent", "list_security_group_rules": "list_security_group_rules", "list_security_groups": "list_security_groups", "list_service_providers": "list_service_providers", 
"list_subnetpools": "list_subnetpools", "list_subnets": "list_subnets", "list_vips": "list_vips", "list_vpnservices": "list_vpnservices", "remove_gateway_router": "remove_gateway_router", "remove_interface_router": "remove_interface_router", "remove_network_from_dhcp_agent": "remove_network_from_dhcp_agent", "remove_router_from_l3_agent": "remove_router_from_l3_agent", "retrieve_pool_stats": "retrieve_pool_stats", "show_agent": "show_agent", "show_ext": "show_ext", "show_extension": "show_extension", "show_firewall": "show_firewall", "show_firewall_policy": "show_firewall_policy", "show_firewall_rule": "show_firewall_rule", "show_floatingip": "show_floatingip", "show_gateway_device": "show_gateway_device", "show_health_monitor": "show_health_monitor", "show_ikepolicy": "show_ikepolicy", "show_ipsec_site_connection": "show_ipsec_site_connection", "show_ipsecpolicy": "show_ipsecpolicy", "show_lbaas_healthmonitor": "show_lbaas_healthmonitor", "show_lbaas_member": "show_lbaas_member", "show_lbaas_pool": "show_lbaas_pool", "show_listener": "show_listener", "show_loadbalancer": "show_loadbalancer", "show_member": "show_member", "show_metering_label": "show_metering_label", "show_metering_label_rule": "show_metering_label_rule", "show_network": "show_network", "show_network_gateway": "show_network_gateway", "show_pool": "show_pool", "show_port": "show_port", "show_qos_queue": "show_qos_queue", "show_quota": "show_quota", "show_router": "show_router", "show_security_group": "show_security_group", "show_security_group_rule": "show_security_group_rule", "show_subnet": "show_subnet", "show_subnetpool": "show_subnetpool", "show_vip": "show_vip", "show_vpnservice": "show_vpnservice", "update_agent": "update_agent", "update_ext": "update_ext", "update_firewall": "update_firewall", "update_firewall_policy": "update_firewall_policy", "update_firewall_rule": "update_firewall_rule", "update_floatingip": "update_floatingip", "update_gateway_device": "update_gateway_device", 
"update_health_monitor": "update_health_monitor", "update_ikepolicy": "update_ikepolicy", "update_ipsec_site_connection": "update_ipsec_site_connection", "update_ipsecpolicy": "update_ipsecpolicy", "update_lbaas_healthmonitor": "update_lbaas_healthmonitor", "update_lbaas_member": "update_lbaas_member", "update_lbaas_pool": "update_lbaas_pool", "update_listener": "update_listener", "update_loadbalancer": "update_loadbalancer", "update_member": "update_member", "update_network": "update_network", "update_network_gateway": "update_network_gateway", "update_pool": "update_pool", "update_port": "update_port", "update_quota": "update_quota", "update_router": "update_router", "update_security_group": "update_security_group", "update_subnet": "update_subnet", "update_subnetpool": "update_subnetpool", "update_vip": "update_vip", "update_vpnservice": "update_vpnservice" }, "cinder": { "_comment": "It uses cinderclient.v2.", "availability_zones_find": "availability_zones.find", "availability_zones_findall": "availability_zones.findall", "availability_zones_list": "availability_zones.list", "backups_create": "backups.create", "backups_delete": "backups.delete", "backups_export_record": "backups.export_record", "backups_find": "backups.find", "backups_findall": "backups.findall", "backups_get": "backups.get", "backups_import_record": "backups.import_record", "backups_list": "backups.list", "backups_reset_state": "backups.reset_state", "capabilities_get": "capabilities.get", "cgsnapshots_create": "cgsnapshots.create", "cgsnapshots_delete": "cgsnapshots.delete", "cgsnapshots_find": "cgsnapshots.find", "cgsnapshots_findall": "cgsnapshots.findall", "cgsnapshots_get": "cgsnapshots.get", "cgsnapshots_list": "cgsnapshots.list", "cgsnapshots_update": "cgsnapshots.update", "consistencygroups_create": "consistencygroups.create", "consistencygroups_create_from_src": "consistencygroups.create_from_src", "consistencygroups_delete": "consistencygroups.delete", "consistencygroups_find": 
"consistencygroups.find", "consistencygroups_findall": "consistencygroups.findall", "consistencygroups_get": "consistencygroups.get", "consistencygroups_list": "consistencygroups.list", "consistencygroups_update": "consistencygroups.update", "limits_get": "limits.get", "pools_list": "pools.list", "qos_specs_associate": "qos_specs.associate", "qos_specs_create": "qos_specs.create", "qos_specs_delete": "qos_specs.delete", "qos_specs_disassociate": "qos_specs.disassociate", "qos_specs_disassociate_all": "qos_specs.disassociate_all", "qos_specs_find": "qos_specs.find", "qos_specs_findall": "qos_specs.findall", "qos_specs_get": "qos_specs.get", "qos_specs_get_associations": "qos_specs.get_associations", "qos_specs_list": "qos_specs.list", "qos_specs_set_keys": "qos_specs.set_keys", "qos_specs_unset_keys": "qos_specs.unset_keys", "quota_classes_get": "quota_classes.get", "quota_classes_update": "quota_classes.update", "quotas_defaults": "quotas.defaults", "quotas_delete": "quotas.delete", "quotas_get": "quotas.get", "quotas_update": "quotas.update", "restores_restore": "restores.restore", "services_disable": "services.disable", "services_disable_log_reason": "services.disable_log_reason", "services_enable": "services.enable", "services_find": "services.find", "services_findall": "services.findall", "services_list": "services.list", "transfers_accept": "transfers.accept", "transfers_create": "transfers.create", "transfers_delete": "transfers.delete", "transfers_find": "transfers.find", "transfers_findall": "transfers.findall", "transfers_get": "transfers.get", "transfers_list": "transfers.list", "volume_encryption_types_create": "volume_encryption_types.create", "volume_encryption_types_delete": "volume_encryption_types.delete", "volume_encryption_types_find": "volume_encryption_types.find", "volume_encryption_types_findall": "volume_encryption_types.findall", "volume_encryption_types_get": "volume_encryption_types.get", "volume_encryption_types_list": 
"volume_encryption_types.list", "volume_encryption_types_update": "volume_encryption_types.update", "volume_snapshots_create": "volume_snapshots.create", "volume_snapshots_delete": "volume_snapshots.delete", "volume_snapshots_delete_metadata": "volume_snapshots.delete_metadata", "volume_snapshots_find": "volume_snapshots.find", "volume_snapshots_findall": "volume_snapshots.findall", "volume_snapshots_get": "volume_snapshots.get", "volume_snapshots_list": "volume_snapshots.list", "volume_snapshots_reset_state": "volume_snapshots.reset_state", "volume_snapshots_set_metadata": "volume_snapshots.set_metadata", "volume_snapshots_update": "volume_snapshots.update", "volume_snapshots_update_all_metadata": "volume_snapshots.update_all_metadata", "volume_snapshots_update_snapshot_status": "volume_snapshots.update_snapshot_status", "volume_type_access_add_project_access": "volume_type_access.add_project_access", "volume_type_access_find": "volume_type_access.find", "volume_type_access_findall": "volume_type_access.findall", "volume_type_access_list": "volume_type_access.list", "volume_type_access_remove_project_access": "volume_type_access.remove_project_access", "volume_types_create": "volume_types.create", "volume_types_default": "volume_types.default", "volume_types_delete": "volume_types.delete", "volume_types_find": "volume_types.find", "volume_types_findall": "volume_types.findall", "volume_types_get": "volume_types.get", "volume_types_list": "volume_types.list", "volume_types_update": "volume_types.update", "volumes_attach": "volumes.attach", "volumes_begin_detaching": "volumes.begin_detaching", "volumes_create": "volumes.create", "volumes_delete": "volumes.delete", "volumes_delete_image_metadata": "volumes.delete_image_metadata", "volumes_delete_metadata": "volumes.delete_metadata", "volumes_detach": "volumes.detach", "volumes_extend": "volumes.extend", "volumes_find": "volumes.find", "volumes_findall": "volumes.findall", "volumes_force_delete": 
"volumes.force_delete", "volumes_get": "volumes.get", "volumes_get_encryption_metadata": "volumes.get_encryption_metadata", "volumes_get_pools": "volumes.get_pools", "volumes_initialize_connection": "volumes.initialize_connection", "volumes_list": "volumes.list", "volumes_manage": "volumes.manage", "volumes_migrate_volume": "volumes.migrate_volume", "volumes_migrate_volume_completion": "volumes.migrate_volume_completion", "volumes_promote": "volumes.promote", "volumes_reenable": "volumes.reenable", "volumes_reserve": "volumes.reserve", "volumes_reset_state": "volumes.reset_state", "volumes_retype": "volumes.retype", "volumes_roll_detaching": "volumes.roll_detaching", "volumes_set_bootable": "volumes.set_bootable", "volumes_set_image_metadata": "volumes.set_image_metadata", "volumes_set_metadata": "volumes.set_metadata", "volumes_show_image_metadata": "volumes.show_image_metadata", "volumes_terminate_connection": "volumes.terminate_connection", "volumes_unmanage": "volumes.unmanage", "volumes_unreserve": "volumes.unreserve", "volumes_update": "volumes.update", "volumes_update_all_metadata": "volumes.update_all_metadata", "volumes_update_readonly_flag": "volumes.update_readonly_flag", "volumes_upload_to_image": "volumes.upload_to_image" }, "trove": { "_comment": "It uses troveclient.v1.", "backups_create": "backups.create", "backups_delete": "backups.delete", "backups_find": "backups.find", "backups_findall": "backups.findall", "backups_get": "backups.get", "backups_list": "backups.list", "clusters_add_shard": "clusters.add_shard", "clusters_create": "clusters.create", "clusters_delete": "clusters.delete", "clusters_find": "clusters.find", "clusters_findall": "clusters.findall", "clusters_get": "clusters.get", "clusters_grow": "clusters.grow", "clusters_list": "clusters.list", "clusters_shrink": "clusters.shrink", "configuration_parameters_find": "configuration_parameters.find", "configuration_parameters_findall": "configuration_parameters.findall", 
"configuration_parameters_get_parameter": "configuration_parameters.get_parameter", "configuration_parameters_get_parameter_by_version": "configuration_parameters.get_parameter_by_version", "configuration_parameters_list": "configuration_parameters.list", "configuration_parameters_parameters": "configuration_parameters.parameters", "configuration_parameters_parameters_by_version": "configuration_parameters.parameters_by_version", "configurations_create": "configurations.create", "configurations_delete": "configurations.delete", "configurations_edit": "configurations.edit", "configurations_find": "configurations.find", "configurations_findall": "configurations.findall", "configurations_get": "configurations.get", "configurations_instances": "configurations.instances", "configurations_list": "configurations.list", "configurations_update": "configurations.update", "databases_create": "databases.create", "databases_delete": "databases.delete", "databases_find": "databases.find", "databases_findall": "databases.findall", "databases_list": "databases.list", "datastore_versions_find": "datastore_versions.find", "datastore_versions_findall": "datastore_versions.findall", "datastore_versions_get": "datastore_versions.get", "datastore_versions_get_by_uuid": "datastore_versions.get_by_uuid", "datastore_versions_list": "datastore_versions.list", "datastore_versions_update": "datastore_versions.update", "datastores_find": "datastores.find", "datastores_findall": "datastores.findall", "datastores_get": "datastores.get", "datastores_list": "datastores.list", "flavors_find": "flavors.find", "flavors_findall": "flavors.findall", "flavors_get": "flavors.get", "flavors_list": "flavors.list", "flavors_list_datastore_version_associated_flavors": "flavors.list_datastore_version_associated_flavors", "instances_backups": "instances.backups", "instances_configuration": "instances.configuration", "instances_create": "instances.create", "instances_delete": "instances.delete", 
"instances_edit": "instances.edit", "instances_eject_replica_source": "instances.eject_replica_source", "instances_find": "instances.find", "instances_findall": "instances.findall", "instances_get": "instances.get", "instances_list": "instances.list", "instances_modify": "instances.modify", "instances_promote_to_replica_source": "instances.promote_to_replica_source", "instances_resize_instance": "instances.resize_instance", "instances_resize_volume": "instances.resize_volume", "instances_restart": "instances.restart", "limits_find": "limits.find", "limits_findall": "limits.findall", "limits_list": "limits.list", "metadata_create": "metadata.create", "metadata_delete": "metadata.delete", "metadata_edit": "metadata.edit", "metadata_list": "metadata.list", "metadata_show": "metadata.show", "metadata_update": "metadata.update", "root_create": "root.create", "root_create_cluster_root": "root.create_cluster_root", "root_create_instance_root": "root.create_instance_root", "root_delete": "root.delete", "root_disable_instance_root": "root.disable_instance_root", "root_find": "root.find", "root_findall": "root.findall", "root_is_cluster_root_enabled": "root.is_cluster_root_enabled", "root_is_instance_root_enabled": "root.is_instance_root_enabled", "root_is_root_enabled": "root.is_root_enabled", "root_list": "root.list", "security_group_rules_create": "security_group_rules.create", "security_group_rules_delete": "security_group_rules.delete", "security_group_rules_find": "security_group_rules.find", "security_group_rules_findall": "security_group_rules.findall", "security_group_rules_list": "security_group_rules.list", "security_groups_find": "security_groups.find", "security_groups_findall": "security_groups.findall", "security_groups_get": "security_groups.get", "security_groups_list": "security_groups.list", "users_change_passwords": "users.change_passwords", "users_create": "users.create", "users_delete": "users.delete", "users_find": "users.find", "users_findall": 
"users.findall", "users_get": "users.get", "users_grant": "users.grant", "users_list": "users.list", "users_list_access": "users.list_access", "users_revoke": "users.revoke", "users_update_attributes": "users.update_attributes" }, "ironic": { "_comment": "It uses ironicclient.v1.", "chassis_create": "chassis.create", "chassis_delete": "chassis.delete", "chassis_get": "chassis.get", "chassis_list": "chassis.list", "chassis_list_nodes": "chassis.list_nodes", "chassis_update": "chassis.update", "driver_delete": "driver.delete", "driver_get": "driver.get", "driver_get_vendor_passthru_methods": "driver.get_vendor_passthru_methods", "driver_list": "driver.list", "driver_properties": "driver.properties", "driver_raid_logical_disk_properties": "driver.raid_logical_disk_properties", "driver_update": "driver.update", "driver_vendor_passthru": "driver.vendor_passthru", "node_create": "node.create", "node_delete": "node.delete", "node_get": "node.get", "node_get_boot_device": "node.get_boot_device", "node_get_by_instance_uuid": "node.get_by_instance_uuid", "node_get_console": "node.get_console", "node_get_supported_boot_devices": "node.get_supported_boot_devices", "node_get_vendor_passthru_methods": "node.get_vendor_passthru_methods", "node_list": "node.list", "node_list_ports": "node.list_ports", "node_set_boot_device": "node.set_boot_device", "node_set_console_mode": "node.set_console_mode", "node_set_maintenance": "node.set_maintenance", "node_set_power_state": "node.set_power_state", "node_set_provision_state": "node.set_provision_state", "node_set_target_raid_config": "node.set_target_raid_config", "node_states": "node.states", "node_update": "node.update", "node_validate": "node.validate", "node_vendor_passthru": "node.vendor_passthru", "node_vif_attach": "node.vif_attach", "node_vif_detach": "node.vif_detach", "node_vif_list": "node.vif_list", "node_wait_for_provision_state": "node.wait_for_provision_state", "port_create": "port.create", "port_delete": "port.delete", 
"port_get": "port.get", "port_get_by_address": "port.get_by_address", "port_list": "port.list", "port_update": "port.update" }, "baremetal_introspection": { "_comment": "It uses ironic_inspector_client.v1.", "abort": "abort", "introspect": "introspect", "get_status": "get_status", "get_data": "get_data", "rules_create": "rules.create", "rules_delete": "rules.delete", "rules_delete_all": "rules.delete_all", "rules_from_json": "rules.from_json", "rules_get": "rules.get", "rules_get_all": "rules.get_all", "wait_for_finish": "wait_for_finish" }, "swift": { "_comment": "It uses swiftclient.v1.", "head_account": "head_account", "get_account": "get_account", "post_account": "post_account", "head_container": "head_container", "get_container": "get_container", "put_container": "put_container", "post_container": "post_container", "delete_container": "delete_container", "head_object": "head_object", "get_object": "get_object", "put_object": "put_object", "post_object": "post_object", "delete_object": "delete_object", "copy_object": "copy_object", "get_capabilities": "get_capabilities" }, "zaqar": { "_comment": "It uses zaqarclient.v2.", "claim_messages": "claim_messages", "delete_messages": "delete_messages", "queue_messages": "queue_messages", "queue_post": "queue_post", "queue_pop": "queue_pop" }, "barbican": { "_comment": "It uses barbicanclient", "cas_get": "cas.get", "cas_list": "cas.list", "cas_total": "cas.total", "containers_create": "containers.create", "containers_create_certificate": "containers.create_certificate", "containers_create_rsa": "containers.create_rsa", "containers_delete": "containers.delete", "containers_get": "containers.get", "containers_list": "containers.list", "containers_register_consumer": "containers.register_consumer", "containers_remove_consumer": "containers.remove_consumer", "containers_total": "containers.total", "orders_create": "orders.create", "orders_create_asymmetric": "orders.create_asymmetric", "orders_create_certificate": 
"orders.create_certificate", "orders_create_key": "orders.create_key", "orders_delete": "orders.delete", "orders_get": "orders.get", "orders_list": "orders.list", "orders_total": "orders.total", "secrets_create": "secrets.create", "secrets_delete": "secrets.delete", "secrets_get": "secrets.get", "secrets_list": "secrets.list", "secrets_total": "secrets.total" }, "mistral": { "_comment": "It uses mistralclient.v2.", "action_executions_create": "action_executions.create", "action_executions_delete": "action_executions.delete", "action_executions_find": "action_executions.find", "action_executions_get": "action_executions.get", "action_executions_list": "action_executions.list", "action_executions_update": "action_executions.update", "actions_create": "actions.create", "actions_delete": "actions.delete", "actions_find": "actions.find", "actions_get": "actions.get", "actions_list": "actions.list", "actions_update": "actions.update", "cron_triggers_create": "cron_triggers.create", "cron_triggers_delete": "cron_triggers.delete", "cron_triggers_find": "cron_triggers.find", "cron_triggers_get": "cron_triggers.get", "cron_triggers_list": "cron_triggers.list", "environments_create": "environments.create", "environments_delete": "environments.delete", "environments_find": "environments.find", "environments_get": "environments.get", "environments_list": "environments.list", "environments_update": "environments.update", "executions_create": "executions.create", "executions_delete": "executions.delete", "executions_find": "executions.find", "executions_get": "executions.get", "executions_list": "executions.list", "executions_update": "executions.update", "members_create": "members.create", "members_delete": "members.delete", "members_find": "members.find", "members_get": "members.get", "members_list": "members.list", "members_update": "members.update", "services_find": "services.find", "services_list": "services.list", "tasks_find": "tasks.find", "tasks_get": "tasks.get", 
"tasks_list": "tasks.list", "tasks_rerun": "tasks.rerun", "workbooks_create": "workbooks.create", "workbooks_delete": "workbooks.delete", "workbooks_find": "workbooks.find", "workbooks_get": "workbooks.get", "workbooks_list": "workbooks.list", "workbooks_update": "workbooks.update", "workbooks_validate": "workbooks.validate", "workflows_create": "workflows.create", "workflows_delete": "workflows.delete", "workflows_find": "workflows.find", "workflows_get": "workflows.get", "workflows_list": "workflows.list", "workflows_update": "workflows.update", "workflows_validate": "workflows.validate" }, "designate": { "_comment": "It uses designateclient.v1.", "diagnostics_ping": "diagnostics.ping", "domains_create ": "domains.create", "domains_delete": "domains.delete", "domains_get": "domains.get", "domains_list": "domains.list", "domains_list_domain_servers": "domains.list_domain_servers", "domains_update": "domains.update", "quotas_get": "quotas.get", "quotas_reset": "quotas.reset", "quotas_update": "quotas.update", "records_create": "records.create", "records_delete": "records.delete", "records_get": "records.get", "records_list": "records.list", "records_update": "records.update", "reports_count_all": "reports.count_all", "reports_count_domains": "reports.count_domains", "reports_count_records": "reports.count_records", "reports_count_tenants": "reports.count_tenants", "reports_tenant_domains": "reports.tenant_domains", "reports_tenants_all": "reports.tenants_all", "servers_create": "servers.create", "servers_delete": "servers.delete", "servers_get": "servers.get", "servers_list": "servers.list", "servers_update": "servers.update", "sync_sync_all": "sync.sync_all", "sync_sync_domain": "sync.sync_domain", "sync_sync_record": "sync.sync_record", "touch_domain": "touch.domain" }, "magnum": { "_comment": "It uses magnumclient.v1.", "baymodels_create": "baymodels.create", "baymodels_delete": "baymodels.delete", "baymodels_get": "baymodels.get", "baymodels_list": 
"baymodels.list", "baymodels_update": "baymodels.update", "bays_create": "bays.create", "bays_delete": "bays.delete", "bays_get": "bays.get", "bays_list": "bays.list", "bays_update": "bays.update", "certificates_create": "certificates.create", "certificates_get": "certificates.get", "certificates_rotate_ca": "certificates.rotate_ca", "mservices_list": "mservices.list" }, "murano":{ "_comment": "It uses muranoclient.v1.", "categories_add": "categories.add", "categories_delete": "categories.delete", "categories_get": "categories.get", "categories_list": "categories.list", "deployments_list": "deployments.list", "deployments_reports": "deployments.reports", "env_templates_clone": "env_templates.clone", "env_templates_create": "env_templates.create", "env_templates_create_app": "env_templates.create_app", "env_templates_create_env": "env_templates.create_env", "env_templates_delete": "env_templates.delete", "env_templates_delete_app": "env_templates.delete_app", "env_templates_get": "env_templates.get", "env_templates_list": "env_templates.list", "env_templates_update": "env_templates.update", "environments_create": "environments.create", "environments_delete": "environments.delete", "environments_find": "environments.find", "environments_findall": "environments.findall", "environments_get": "environments.get", "environments_last_status": "environments.last_status", "environments_list": "environments.list", "environments_update": "environments.update", "instance_statistics_get": "instance_statistics.get", "instance_statistics_get_aggregated": "instance_statistics.get_aggregated", "packages_create": "packages.create", "packages_delete": "packages.delete", "packages_download": "packages.download", "packages_filter": "packages.filter", "packages_get": "packages.get", "packages_get_logo": "packages.get_logo", "packages_get_supplier_logo": "packages.get_supplier_logo", "packages_get_ui": "packages.get_ui", "packages_list": "packages.list", "packages_toggle_active": 
"packages.toggle_active", "packages_toggle_public": "packages.toggle_public", "packages_update": "packages.update", "request_statistics_list": "request_statistics.list", "services_delete": "services.delete", "services_get": "services.get", "services_list": "services.list", "services_post": "services.post", "sessions_configure": "sessions.configure", "sessions_delete": "sessions.delete", "sessions_deploy": "sessions.deploy", "sessions_get": "sessions.get" }, "tacker":{ "_comment": "It uses tackerclient.v1_0.", "list_extensions": "list_extensions", "show_extension": "show_extension", "create_vnfd": "create_vnfd", "delete_vnfd": "delete_vnfd", "list_vnfds": "list_vnfds", "show_vnfd": "show_vnfd", "create_vnf": "create_vnf", "update_vnf": "update_vnf", "delete_vnf": "delete_vnf", "list_vnfs": "list_vnfs", "show_vnf": "show_vnf", "create_vim": "create_vim", "update_vim": "update_vim", "delete_vim": "delete_vim", "list_vims": "list_vims", "show_vim": "show_vim" }, "senlin":{ "_comment": "It uses senlinclient.v1_0.", "profile_types": "profile_types", "get_profile_type": "get_profile_type", "profiles": "profiles", "create_profile": "create_profile", "get_profile": "get_profile", "update_profile": "update_profile", "delete_profile": "delete_profile", "validate_profile": "validate_profile", "policy_types": "policy_types", "get_policy_type": "get_policy_type", "policies": "policies", "create_policy": "create_policy", "get_policy": "get_policy", "update_policy": "update_policy", "delete_policy": "delete_policy", "validate_policy": "validate_policy", "clusters": "clusters", "create_cluster": "create_cluster", "get_cluster": "get_cluster", "update_cluster": "update_cluster", "delete_cluster": "delete_cluster", "cluster_add_nodes": "cluster_add_nodes", "cluster_del_nodes": "cluster_del_nodes", "cluster_resize": "cluster_resize", "cluster_scale_out": "cluster_scale_out", "cluster_scale_in": "cluster_scale_in", "cluster_policies": "cluster_policies", "get_cluster_policy": 
"get_cluster_policy", "cluster_attach_policy": "cluster_attach_policy", "cluster_detach_policy": "cluster_detach_policy", "cluster_update_policy": "cluster_update_policy", "cluster_collect": "cluster_collect", "check_cluster": "check_cluster", "recover_cluster": "recover_cluster", "nodes": "nodes", "create_node": "create_node", "get_node": "get_node", "update_node": "update_node", "delete_node": "delete_node", "check_node": "check_node", "recover_node": "recover_node", "receivers": "receivers", "create_receiver": "create_receiver", "get_receiver": "get_receiver", "delete_receiver": "delete_receiver", "events": "events", "get_event": "get_event", "actions": "actions", "get_action": "get_action" }, "glare": { "_comment": "It uses glareclient.v1.", "artifacts_create": "artifacts.create", "artifacts_delete": "artifacts.delete", "artifacts_get": "artifacts.get", "artifacts_list": "artifacts.list", "artifacts_update": "artifacts.update", "artifacts_activate": "artifacts.activate", "artifacts_deactivate": "artifacts.deactivate", "artifacts_reactivate": "artifacts.reactivate", "artifacts_publish": "artifacts.publish", "artifacts_add_tag": "artifacts.add_tag", "artifacts_remove_tag": "artifacts.remove_tag", "artifacts_get_type_list": "artifacts.get_type_list", "artifacts_get_type_schema": "artifacts.get_type_schema", "artifacts_upload_blob": "artifacts.upload_blob", "artifacts_download_blob": "artifacts.download_blob", "artifacts_add_external_location": "artifacts.add_external_location" } } mistral-6.0.0/mistral/actions/openstack/actions.py0000666000175100017510000006061713245513261022346 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
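The `_comment` entries embedded in the mapping above exist purely as documentation; Mistral strips them when the mapping file is loaded (the `get_mapping` helper later in this archive does this with a recursive `delete_comment`). A minimal standalone sketch of that cleanup — `strip_comments` is an illustrative name, not the function used in the source:

```python
import json

def strip_comments(mapping):
    """Recursively remove '_comment' documentation keys from a mapping."""
    for value in mapping.values():
        if isinstance(value, dict):
            strip_comments(value)
    mapping.pop('_comment', None)
    return mapping

raw = json.loads("""
{"murano": {"_comment": "It uses muranoclient.v1.",
            "categories_list": "categories.list"}}
""")
clean = strip_comments(raw)
# The per-service '_comment' entries are gone; the real mappings remain.
```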
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import functools from oslo_config import cfg from oslo_log import log from oslo_utils import importutils from keystoneauth1 import session as ks_session from keystoneauth1.token_endpoint import Token from keystoneclient.auth import identity from keystoneclient import httpclient from mistral.actions.openstack import base from mistral.utils import inspect_utils from mistral.utils.openstack import keystone as keystone_utils LOG = log.getLogger(__name__) CONF = cfg.CONF IRONIC_API_VERSION = '1.34' """The default microversion to pass to Ironic API. 1.34 corresponds to Pike final. """ def _try_import(module_name): try: return importutils.try_import(module_name) except Exception as e: msg = 'Unable to load module "%s". 
%s' % (module_name, str(e)) LOG.error(msg) return None aodhclient = _try_import('aodhclient.v2.client') barbicanclient = _try_import('barbicanclient.client') cinderclient = _try_import('cinderclient.client') designateclient = _try_import('designateclient.v1') glanceclient = _try_import('glanceclient') glareclient = _try_import('glareclient.v1.client') gnocchiclient = _try_import('gnocchiclient.v1.client') heatclient = _try_import('heatclient.client') ironic_inspector_client = _try_import('ironic_inspector_client.v1') ironicclient = _try_import('ironicclient.v1.client') keystoneclient = _try_import('keystoneclient.v3.client') magnumclient = _try_import('magnumclient.v1.client') mistralclient = _try_import('mistralclient.api.v2.client') muranoclient = _try_import('muranoclient.v1.client') neutronclient = _try_import('neutronclient.v2_0.client') novaclient = _try_import('novaclient.client') senlinclient = _try_import('senlinclient.v1.client') swift_client = _try_import('swiftclient.client') tackerclient = _try_import('tackerclient.v1_0.client') troveclient = _try_import('troveclient.v1.client') zaqarclient = _try_import('zaqarclient.queues.client') class NovaAction(base.OpenStackAction): _service_type = 'compute' @classmethod def _get_client_class(cls): return novaclient.Client def _create_client(self, context): LOG.debug("Nova action security context: %s", context) nova_endpoint = self.get_service_endpoint() session_and_auth = self.get_session_and_auth(context) return novaclient.Client( 2, endpoint_override=nova_endpoint.url, session=session_and_auth['session'] ) @classmethod def _get_fake_client(cls): return cls._get_client_class()(2) class GlanceAction(base.OpenStackAction): _service_type = 'image' @classmethod def _get_client_class(cls): return glanceclient.Client def _create_client(self, context): LOG.debug("Glance action security context: %s", context) glance_endpoint = self.get_service_endpoint() session_and_auth = self.get_session_and_auth(context) return 
self._get_client_class()( '2', endpoint=glance_endpoint.url, session=session_and_auth['session'] ) @classmethod def _get_fake_client(cls): return cls._get_client_class()(endpoint="http://127.0.0.1:9292/v2") class KeystoneAction(base.OpenStackAction): _service_type = 'identity' @classmethod def _get_client_class(cls): return keystoneclient.Client def _create_client(self, context): LOG.debug("Keystone action security context: %s", context) keystone_endpoint = self.get_service_endpoint() session_and_auth = self.get_session_and_auth(context) return self._get_client_class()( endpoint=keystone_endpoint.url, session=session_and_auth['session'] ) @classmethod def _get_fake_client(cls): # Here we need to replace httpclient authenticate method temporarily authenticate = httpclient.HTTPClient.authenticate httpclient.HTTPClient.authenticate = lambda x: True fake_client = cls._get_client_class()() # Once we get fake client, return back authenticate method httpclient.HTTPClient.authenticate = authenticate return fake_client class HeatAction(base.OpenStackAction): _service_type = 'orchestration' @classmethod def _get_client_class(cls): return heatclient.Client def _create_client(self, context): LOG.debug("Heat action security context: %s", context) heat_endpoint = self.get_service_endpoint() session_and_auth = self.get_session_and_auth(context) return self._get_client_class()( '1', endpoint_override=heat_endpoint.url, session=session_and_auth['session'] ) @classmethod def _get_fake_client(cls): return cls._get_client_class()( '1', endpoint="http://127.0.0.1:8004/v1/fake" ) class NeutronAction(base.OpenStackAction): _service_type = 'network' @classmethod def _get_client_class(cls): return neutronclient.Client def _create_client(self, context): LOG.debug("Neutron action security context: %s", context) neutron_endpoint = self.get_service_endpoint() session_and_auth = self.get_session_and_auth(context) return self._get_client_class()( endpoint_override=neutron_endpoint.url, 
session=session_and_auth['session'] ) @classmethod def _get_fake_client(cls): return cls._get_client_class()(endpoint="http://127.0.0.1") class CinderAction(base.OpenStackAction): _service_type = 'volumev2' @classmethod def _get_client_class(cls): return cinderclient.Client def _create_client(self, context): LOG.debug("Cinder action security context: %s", context) cinder_endpoint = self.get_service_endpoint() session_and_auth = self.get_session_and_auth(context) client = self._get_client_class()( '2', endpoint_override=cinder_endpoint.url, session=session_and_auth['session'] ) return client @classmethod def _get_fake_client(cls): return cls._get_client_class()('2') class MistralAction(base.OpenStackAction): _service_type = 'workflowv2' @classmethod def _get_client_class(cls): return mistralclient.Client def _create_client(self, context): LOG.debug("Mistral action security context: %s", context) if CONF.pecan.auth_enable: session_and_auth = self.get_session_and_auth(context) return self._get_client_class()( mistral_url=session_and_auth['auth'].endpoint, **session_and_auth) else: mistral_url = 'http://{}:{}/v2'.format(CONF.api.host, CONF.api.port) return self._get_client_class()(mistral_url=mistral_url) @classmethod def _get_fake_client(cls): return cls._get_client_class()() class TroveAction(base.OpenStackAction): _service_type = 'database' @classmethod def _get_client_class(cls): return troveclient.Client def _create_client(self, context): LOG.debug("Trove action security context: %s", context) trove_endpoint = self.get_service_endpoint() trove_url = keystone_utils.format_url( trove_endpoint.url, {'tenant_id': context.project_id} ) client = self._get_client_class()( context.user_name, context.auth_token, project_id=context.project_id, auth_url=trove_url, region_name=trove_endpoint.region, insecure=context.insecure ) client.client.auth_token = context.auth_token client.client.management_url = trove_url return client @classmethod def _get_fake_client(cls): return 
cls._get_client_class()("fake_user", "fake_passwd") class IronicAction(base.OpenStackAction): _service_name = 'ironic' @classmethod def _get_client_class(cls): return ironicclient.Client def _create_client(self, context): LOG.debug("Ironic action security context: %s", context) ironic_endpoint = self.get_service_endpoint() return self._get_client_class()( ironic_endpoint.url, token=context.auth_token, region_name=ironic_endpoint.region, os_ironic_api_version=IRONIC_API_VERSION, insecure=context.insecure ) @classmethod def _get_fake_client(cls): return cls._get_client_class()("http://127.0.0.1:6385/") class BaremetalIntrospectionAction(base.OpenStackAction): @classmethod def _get_client_class(cls): return ironic_inspector_client.ClientV1 @classmethod def _get_fake_client(cls): try: # ironic-inspector client tries to get and validate its own # version when created. This might require checking the keystone # catalog if the ironic-inspector server is not listening on the # localhost IP address. Thus, we get a session for this case. sess = keystone_utils.get_admin_session() return cls._get_client_class()(session=sess) except Exception as e: LOG.warning("There was an error trying to create the " "ironic-inspector client using a session: %s", str(e)) # If it's not possible to establish a keystone session, attempt to # create a client without it. This should fall back to where the # ironic-inspector client tries to get its own version on the # default IP address.
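`BaremetalIntrospectionAction` above degrades gracefully when a client cannot be constructed, and the module as a whole guards every client import through `_try_import` so that one missing python-*client package does not break loading the rest of the actions. A standalone sketch of that optional-import pattern, using stdlib `importlib` in place of `oslo.utils.importutils`:

```python
import importlib
import logging

LOG = logging.getLogger(__name__)

def try_import(module_name):
    """Return the imported module, or None if it cannot be loaded."""
    try:
        return importlib.import_module(module_name)
    except Exception as e:
        LOG.error('Unable to load module "%s". %s', module_name, e)
        return None

jsonmod = try_import('json')             # stdlib: import succeeds
missing = try_import('no_such_client')   # not installed: logged, returns None
```

Callers then only need a `None` check before using the client module, which is exactly how the action classes stay importable on hosts without every OpenStack client installed.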
LOG.debug("Attempting to create the ironic-inspector client " "without a session.") return cls._get_client_class()() def _create_client(self, context): LOG.debug( "Baremetal introspection action security context: %s", context) inspector_endpoint = keystone_utils.get_endpoint_for_project( service_type='baremetal-introspection' ) auth = Token(endpoint=inspector_endpoint.url, token=context.auth_token) return self._get_client_class()( api_version=1, session=ks_session.Session(auth) ) class SwiftAction(base.OpenStackAction): _service_name = 'swift' @classmethod def _get_client_class(cls): return swift_client.Connection def _create_client(self, context): LOG.debug("Swift action security context: %s", context) swift_endpoint = self.get_service_endpoint() swift_url = keystone_utils.format_url( swift_endpoint.url, {'tenant_id': context.project_id} ) session_and_auth = self.get_session_and_auth(context) return self._get_client_class()( session=session_and_auth['session'], preauthurl=swift_url ) class ZaqarAction(base.OpenStackAction): _service_type = 'messaging' @classmethod def _get_client_class(cls): return zaqarclient.Client def _create_client(self, context): LOG.debug("Zaqar action security context: %s", context) zaqar_endpoint = self.get_service_endpoint() session_and_auth = self.get_session_and_auth(context) return self._get_client_class()( version=2, url=zaqar_endpoint.url, session=session_and_auth['session'] ) @classmethod def _get_fake_client(cls): return cls._get_client_class()(version=2) @classmethod def _get_client_method(cls, client): method = getattr(cls, cls.client_method_name) # We can't use partial as it's not supported by getargspec @functools.wraps(method) def wrap(*args, **kwargs): return method(client, *args, **kwargs) arguments = inspect_utils.get_arg_list_as_str(method) # Remove client wrap.__arguments__ = arguments.split(', ', 1)[1] return wrap @staticmethod def queue_messages(client, queue_name, **params): """Gets a list of messages from the queue. 
:param client: the Zaqar client :type client: zaqarclient.queues.client :param queue_name: Name of the target queue. :type queue_name: `six.string_type` :param params: Filters to use for getting messages. :type params: **kwargs dict :returns: List of messages. :rtype: `list` """ queue = client.queue(queue_name) return queue.messages(**params) @staticmethod def queue_post(client, queue_name, messages): """Posts one or more messages to a queue. :param client: the Zaqar client :type client: zaqarclient.queues.client :param queue_name: Name of the target queue. :type queue_name: `six.string_type` :param messages: One or more messages to post. :type messages: `list` or `dict` :returns: A dict with the result of this operation. :rtype: `dict` """ queue = client.queue(queue_name) return queue.post(messages) @staticmethod def queue_pop(client, queue_name, count=1): """Pop `count` messages from the queue. :param client: the Zaqar client :type client: zaqarclient.queues.client :param queue_name: Name of the target queue. :type queue_name: `six.string_type` :param count: Number of messages to pop. :type count: int :returns: List of messages. :rtype: `list` """ queue = client.queue(queue_name) return queue.pop(count) @staticmethod def claim_messages(client, queue_name, **params): """Claim messages from the queue :param client: the Zaqar client :type client: zaqarclient.queues.client :param queue_name: Name of the target queue. :type queue_name: `six.string_type` :returns: List of claims :rtype: `list` """ queue = client.queue(queue_name) return queue.claim(**params) @staticmethod def delete_messages(client, queue_name, messages): """Delete messages from the queue :param client: the Zaqar client :type client: zaqarclient.queues.client :param queue_name: Name of the target queue. :type queue_name: `six.string_type` :param messages: List of messages' ids to delete. 
:type messages: *args of `six.string_type` :returns: List of messages' ids that have been deleted :rtype: `list` """ queue = client.queue(queue_name) return queue.delete_messages(*messages) class BarbicanAction(base.OpenStackAction): @classmethod def _get_client_class(cls): return barbicanclient.Client def _create_client(self, context): LOG.debug("Barbican action security context: %s", context) barbican_endpoint = keystone_utils.get_endpoint_for_project('barbican') keystone_endpoint = keystone_utils.get_keystone_endpoint_v2() auth = identity.v2.Token( auth_url=keystone_endpoint.url, tenant_name=context.user_name, token=context.auth_token, tenant_id=context.project_id ) return self._get_client_class()( project_id=context.project_id, endpoint=barbican_endpoint.url, auth=auth, insecure=context.insecure ) @classmethod def _get_fake_client(cls): return cls._get_client_class()( project_id="1", endpoint="http://127.0.0.1:9311" ) @classmethod def _get_client_method(cls, client): if cls.client_method_name != "secrets_store": return super(BarbicanAction, cls)._get_client_method(client) method = getattr(cls, cls.client_method_name) @functools.wraps(method) def wrap(*args, **kwargs): return method(client, *args, **kwargs) arguments = inspect_utils.get_arg_list_as_str(method) # Remove client. wrap.__arguments__ = arguments.split(', ', 1)[1] return wrap @staticmethod def secrets_store(client, name=None, payload=None, algorithm=None, bit_length=None, secret_type=None, mode=None, expiration=None): """Create and Store a secret in Barbican. 
:param client: the Barbican client :type client: barbicanclient.client :param name: A friendly name for the Secret :type name: string :param payload: The unencrypted secret data :type payload: string :param algorithm: The algorithm associated with this secret key :type algorithm: string :param bit_length: The bit length of this secret key :type bit_length: int :param secret_type: The secret type for this secret key :type secret_type: string :param mode: The algorithm mode used with this secret key :type mode: string :param expiration: The expiration time of the secret in ISO 8601 format :type expiration: string :returns: A new Secret object :rtype: class:`barbicanclient.secrets.Secret` """ entity = client.secrets.create( name, payload, algorithm, bit_length, secret_type, mode, expiration ) entity.store() return entity._get_formatted_entity() class DesignateAction(base.OpenStackAction): _service_type = 'dns' @classmethod def _get_client_class(cls): return designateclient.Client def _create_client(self, context): LOG.debug("Designate action security context: %s", context) designate_endpoint = self.get_service_endpoint() designate_url = keystone_utils.format_url( designate_endpoint.url, {'tenant_id': context.project_id} ) client = self._get_client_class()( endpoint=designate_url, tenant_id=context.project_id, auth_url=context.auth_uri, region_name=designate_endpoint.region, service_type='dns', insecure=context.insecure ) client.client.auth_token = context.auth_token client.client.management_url = designate_url return client @classmethod def _get_fake_client(cls): return cls._get_client_class()() class MagnumAction(base.OpenStackAction): @classmethod def _get_client_class(cls): return magnumclient.Client def _create_client(self, context): LOG.debug("Magnum action security context: %s", context) keystone_endpoint = keystone_utils.get_keystone_endpoint_v2() auth_url = keystone_endpoint.url magnum_url = keystone_utils.get_endpoint_for_project('magnum').url
return self._get_client_class()( magnum_url=magnum_url, auth_token=context.auth_token, project_id=context.project_id, user_id=context.user_id, auth_url=auth_url, insecure=context.insecure ) @classmethod def _get_fake_client(cls): return cls._get_client_class()(auth_url='X', magnum_url='X') class MuranoAction(base.OpenStackAction): _service_name = 'murano' @classmethod def _get_client_class(cls): return muranoclient.Client def _create_client(self, context): LOG.debug("Murano action security context: %s", context) keystone_endpoint = keystone_utils.get_keystone_endpoint_v2() murano_endpoint = self.get_service_endpoint() return self._get_client_class()( endpoint=murano_endpoint.url, token=context.auth_token, tenant=context.project_id, region_name=murano_endpoint.region, auth_url=keystone_endpoint.url, insecure=context.insecure ) @classmethod def _get_fake_client(cls): return cls._get_client_class()("http://127.0.0.1:8082/") class TackerAction(base.OpenStackAction): _service_name = 'tacker' @classmethod def _get_client_class(cls): return tackerclient.Client def _create_client(self, context): LOG.debug("Tacker action security context: %s", context) keystone_endpoint = keystone_utils.get_keystone_endpoint_v2() tacker_endpoint = self.get_service_endpoint() return self._get_client_class()( endpoint_url=tacker_endpoint.url, token=context.auth_token, tenant_id=context.project_id, region_name=tacker_endpoint.region, auth_url=keystone_endpoint.url, insecure=context.insecure ) @classmethod def _get_fake_client(cls): return cls._get_client_class()() class SenlinAction(base.OpenStackAction): _service_name = 'senlin' @classmethod def _get_client_class(cls): return senlinclient.Client def _create_client(self, context): LOG.debug("Senlin action security context: %s", context) keystone_endpoint = keystone_utils.get_keystone_endpoint_v2() senlin_endpoint = self.get_service_endpoint() return self._get_client_class()( endpoint_url=senlin_endpoint.url, token=context.auth_token, 
tenant_id=context.project_id, region_name=senlin_endpoint.region, auth_url=keystone_endpoint.url, insecure=context.insecure ) @classmethod def _get_fake_client(cls): return cls._get_client_class()("http://127.0.0.1:8778") class AodhAction(base.OpenStackAction): _service_type = 'alarming' @classmethod def _get_client_class(cls): return aodhclient.Client def _create_client(self, context): LOG.debug("Aodh action security context: %s", context) aodh_endpoint = self.get_service_endpoint() endpoint_url = keystone_utils.format_url( aodh_endpoint.url, {'tenant_id': context.project_id} ) return self._get_client_class()( endpoint_url, region_name=aodh_endpoint.region, token=context.auth_token, username=context.user_name, insecure=context.insecure ) @classmethod def _get_fake_client(cls): return cls._get_client_class()() class GnocchiAction(base.OpenStackAction): _service_type = 'metric' @classmethod def _get_client_class(cls): return gnocchiclient.Client def _create_client(self, context): LOG.debug("Gnocchi action security context: %s", context) gnocchi_endpoint = self.get_service_endpoint() endpoint_url = keystone_utils.format_url( gnocchi_endpoint.url, {'tenant_id': context.project_id} ) return self._get_client_class()( endpoint_url, region_name=gnocchi_endpoint.region, token=context.auth_token, username=context.user_name ) @classmethod def _get_fake_client(cls): return cls._get_client_class()() class GlareAction(base.OpenStackAction): _service_name = 'glare' @classmethod def _get_client_class(cls): return glareclient.Client def _create_client(self, context): LOG.debug("Glare action security context: %s", context) glare_endpoint = self.get_service_endpoint() endpoint_url = keystone_utils.format_url( glare_endpoint.url, {'tenant_id': context.project_id} ) return self._get_client_class()( endpoint_url, **self.get_session_and_auth(context) ) @classmethod def _get_fake_client(cls): return cls._get_client_class()("http://127.0.0.1:9494/") 
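Both `ZaqarAction` and `BarbicanAction` above override `_get_client_method` with the same trick: wrap a `@staticmethod` so the client is pre-bound, keep the name and docstring via `functools.wraps` for introspection, then drop the leading `client` parameter from the advertised argument list. A standalone sketch of that binding — `FakeQueue`/`FakeClient` are illustrative stand-ins, not real zaqarclient types, and `inspect.signature` substitutes for Mistral's `inspect_utils` helper:

```python
import functools
import inspect

class FakeQueue:
    """Stand-in for a Zaqar queue object."""
    def __init__(self, name):
        self.name = name
    def pop(self, count=1):
        return ['msg'] * count

class FakeClient:
    """Stand-in for the Zaqar client."""
    def queue(self, queue_name):
        return FakeQueue(queue_name)

def queue_pop(client, queue_name, count=1):
    """Pop `count` messages from the queue."""
    return client.queue(queue_name).pop(count)

def bind_client(method, client):
    # partial() is avoided in the source because the old getargspec
    # could not introspect it; a wraps-decorated closure works instead.
    @functools.wraps(method)
    def wrap(*args, **kwargs):
        return method(client, *args, **kwargs)
    # Advertise the signature without the leading 'client' argument.
    params = list(inspect.signature(method).parameters)[1:]
    wrap.__arguments__ = ', '.join(params)
    return wrap

action = bind_client(queue_pop, FakeClient())
# action('q1', count=2) → ['msg', 'msg']; action.__arguments__ → 'queue_name, count'
```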
mistral-6.0.0/mistral/actions/openstack/base.py0000666000175100017510000001011513245513261021604 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import abc import inspect import traceback from oslo_log import log from mistral import exceptions as exc from mistral.utils.openstack import keystone as keystone_utils from mistral_lib import actions LOG = log.getLogger(__name__) class OpenStackAction(actions.Action): """OpenStack Action. OpenStack Action is the basis of all OpenStack-specific actions, which are constructed via OpenStack Action generators. """ _kwargs_for_run = {} client_method_name = None _service_name = None _service_type = None _client_class = None def __init__(self, **kwargs): self._kwargs_for_run = kwargs self.action_region = self._kwargs_for_run.pop('action_region', None) @abc.abstractmethod def _create_client(self, context): """Creates client required for action operation.""" return None @classmethod def _get_client_class(cls): return cls._client_class @classmethod def _get_client_method(cls, client): hierarchy_list = cls.client_method_name.split('.') attribute = client for attr in hierarchy_list: attribute = getattr(attribute, attr) return attribute @classmethod def _get_fake_client(cls): """Returns python-client instance which initiated via wrong args. It is needed for getting client-method args and description for saving into DB. 
""" # Default is simple _get_client_class instance return cls._get_client_class()() @classmethod def get_fake_client_method(cls): return cls._get_client_method(cls._get_fake_client()) def _get_client(self, context): """Returns python-client instance via cache or creation Gets client instance according to specific OpenStack Service (e.g. Nova, Glance, Heat, Keystone etc) """ return self._create_client(context) def get_session_and_auth(self, context): """Get keystone session and auth parameters. :param context: the action context :return: dict that can be used to initialize service clients """ return keystone_utils.get_session_and_auth( service_name=self._service_name, service_type=self._service_type, region_name=self.action_region, context=context) def get_service_endpoint(self): """Get OpenStack service endpoint. 'service_name' and 'service_type' are defined in specific OpenStack service action. """ endpoint = keystone_utils.get_endpoint_for_project( service_name=self._service_name, service_type=self._service_type, region_name=self.action_region ) return endpoint def run(self, context): try: method = self._get_client_method(self._get_client(context)) result = method(**self._kwargs_for_run) if inspect.isgenerator(result): return [v for v in result] return result except Exception as e: # Print the traceback for the last exception so that we can see # where the issue comes from. 
LOG.warning(traceback.format_exc()) raise exc.ActionException( "%s.%s failed: %s" % (self.__class__.__name__, self.client_method_name, str(e)) ) def test(self, context): return dict( zip(self._kwargs_for_run, ['test'] * len(self._kwargs_for_run)) ) mistral-6.0.0/mistral/actions/openstack/__init__.py0000666000175100017510000000000013245513261022421 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/actions/openstack/action_generator/0000775000175100017510000000000013245513604023644 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/actions/openstack/action_generator/base.py0000666000175100017510000001265413245513261025141 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
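`OpenStackAction._get_client_method` above resolves a dotted `client_method_name` such as `"servers.list"` by walking attributes on the client object one segment at a time. A standalone sketch of that traversal — the nested `SimpleNamespace` client is a stand-in, not a real novaclient:

```python
from types import SimpleNamespace

def get_client_method(client, client_method_name):
    """Walk 'a.b.c' attribute paths, as _get_client_method does."""
    attribute = client
    for attr in client_method_name.split('.'):
        attribute = getattr(attribute, attr)
    return attribute

fake_nova = SimpleNamespace(
    servers=SimpleNamespace(list=lambda: ['vm-1', 'vm-2'])
)
method = get_client_method(fake_nova, 'servers.list')
# method() → ['vm-1', 'vm-2']
```

This is what lets the JSON mapping earlier in the archive express every action as a simple dotted path on its service client.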
import json import os from oslo_config import cfg from oslo_log import log as logging import pkg_resources as pkg from mistral.actions import action_generator from mistral.utils import inspect_utils as i_u from mistral import version CONF = cfg.CONF LOG = logging.getLogger(__name__) def get_mapping(): def delete_comment(map_part): for key, value in map_part.items(): if isinstance(value, dict): delete_comment(value) if '_comment' in map_part: del map_part['_comment'] package = version.version_info.package if os.path.isabs(CONF.openstack_actions_mapping_path): mapping_file_path = CONF.openstack_actions_mapping_path else: path = CONF.openstack_actions_mapping_path mapping_file_path = pkg.resource_filename(package, path) LOG.info( "Processing OpenStack action mapping from file: %s", mapping_file_path ) with open(mapping_file_path) as fh: mapping = json.load(fh) for k, v in mapping.items(): if isinstance(v, dict): delete_comment(v) return mapping class OpenStackActionGenerator(action_generator.ActionGenerator): """OpenStackActionGenerator. Base generator for all OpenStack actions, creates a client method declaration using specific python-client and sets needed arguments to actions. """ action_namespace = None base_action_class = None @classmethod def prepare_action_inputs(cls, origin_inputs, added=[]): """Modify action input string. Sometimes we need to change the default action input definition for OpenStack actions in order to make the workflow more powerful. 
Examples:: >>> prepare_action_inputs('a,b,c', added=['region=RegionOne']) a, b, c, region=RegionOne >>> prepare_action_inputs('a,b,c=1', added=['region=RegionOne']) a, b, region=RegionOne, c=1 >>> prepare_action_inputs('a,b,c=1,**kwargs', added=['region=RegionOne']) a, b, region=RegionOne, c=1, **kwargs >>> prepare_action_inputs('**kwargs', added=['region=RegionOne']) region=RegionOne, **kwargs >>> prepare_action_inputs('', added=['region=RegionOne']) region=RegionOne :param origin_inputs: A string consists of action inputs, separated by comma. :param added: (Optional) A list of params to add to input string. :return: The new action input string. """ if not origin_inputs: return ", ".join(added) inputs = [i.strip() for i in origin_inputs.split(',')] kwarg_index = None for index, input in enumerate(inputs): if "=" in input: kwarg_index = index if "**" in input: kwarg_index = index - 1 kwarg_index = len(inputs) if kwarg_index is None else kwarg_index kwarg_index = kwarg_index + 1 if kwarg_index < 0 else kwarg_index for a in added: if "=" not in a: inputs.insert(0, a) kwarg_index += 1 else: inputs.insert(kwarg_index, a) return ", ".join(inputs) @classmethod def create_action_class(cls, method_name): if not method_name: return None action_class = type(str(method_name), (cls.base_action_class,), {'client_method_name': method_name}) return action_class @classmethod def create_actions(cls): mapping = get_mapping() method_dict = mapping.get(cls.action_namespace, {}) action_classes = [] for action_name, method_name in method_dict.items(): class_ = cls.create_action_class(method_name) try: client_method = class_.get_fake_client_method() except Exception: LOG.exception( "Failed to create action: %s.%s", cls.action_namespace, action_name ) continue arg_list = i_u.get_arg_list_as_str(client_method) # Support specifying region for OpenStack actions. 
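The doctest examples on `prepare_action_inputs` above pin down its behavior: added positionals go first, while added keyword defaults are inserted after existing positionals but before the trailing keyword defaults and `**kwargs`. A self-contained re-implementation of the same algorithm (with an immutable `added=()` default in place of the original mutable `added=[]`):

```python
def prepare_action_inputs(origin_inputs, added=()):
    """Insert extra params into a comma-separated action input string."""
    if not origin_inputs:
        return ", ".join(added)
    inputs = [i.strip() for i in origin_inputs.split(',')]
    kwarg_index = None
    for index, inp in enumerate(inputs):
        if "=" in inp:
            kwarg_index = index
        if "**" in inp:
            kwarg_index = index - 1
    kwarg_index = len(inputs) if kwarg_index is None else kwarg_index
    kwarg_index = kwarg_index + 1 if kwarg_index < 0 else kwarg_index
    for a in added:
        if "=" not in a:
            inputs.insert(0, a)   # positional: goes to the front
            kwarg_index += 1
        else:
            inputs.insert(kwarg_index, a)  # kwarg: before existing kwargs
    return ", ".join(inputs)

# Mirrors the documented doctests, e.g.:
# prepare_action_inputs('a,b,c=1,**kwargs', added=['region=RegionOne'])
#   → 'a, b, region=RegionOne, c=1, **kwargs'
```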
modules = CONF.openstack_actions.modules_support_region if cls.action_namespace in modules: arg_list = cls.prepare_action_inputs( arg_list, added=['action_region=""'] ) description = i_u.get_docstring(client_method) action_classes.append( { 'class': class_, 'name': "%s.%s" % (cls.action_namespace, action_name), 'description': description, 'arg_list': arg_list, } ) return action_classes mistral-6.0.0/mistral/actions/openstack/action_generator/__init__.py0000666000175100017510000000000013245513261025744 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/actions/base.py0000666000175100017510000000753413245513261017630 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import abc import warnings warnings.warn( "mistral.actions.Action is deprecated as of the 5.0.0 release in favor of " "mistral_lib. It will be removed in a future release.", DeprecationWarning ) class Action(object): """Action. Action is a means in Mistral to perform some useful work associated with a workflow during its execution. Every workflow task is configured with an action and when the task runs it eventually delegates to the action. When it happens task parameters get evaluated (calculating expressions, if any) and are treated as action parameters. So in a regular general purpose languages terminology action is a method declaration and task is a method call. Base action class initializer doesn't have arguments. 
However, concrete action classes may have any number of parameters defining action behavior. These parameters must correspond to parameters declared in action specification (e.g. using DSL or others). Action initializer may have a conventional argument with name "action_context". If it presents then action factory will fill it with a dictionary containing contextual information like execution identifier, workbook name and other that may be needed for some specific action implementations. """ @abc.abstractmethod def run(self): """Run action logic. :return: Result of the action. Note that for asynchronous actions it should always be None, however, if even it's not None it will be ignored by a caller. Result can be of two types: 1) Any serializable value meaningful from a user perspective (such as string, number or dict). 2) Instance of {mistral.workflow.utils.Result} which has field "data" for success result and field "error" for keeping so called "error result" like HTTP error code and similar. Using the second type allows to communicate a result even in case of error and hence to have conditions in "on-error" clause of direct workflows. Depending on particular action semantics one or another option may be preferable. In case if action failed and there's no need to communicate any error result this method should throw a ActionException. """ pass @abc.abstractmethod def test(self): """Returns action test result. This method runs in test mode as a test version of method run() to generate and return a representative test result. It's basically a contract for action 'dry-run' behavior specifically useful for testing and workflow designing purposes. :return: Representative action result. """ pass def is_sync(self): """Returns True if the action is synchronous, otherwise False. :return: True if the action is synchronous and method run() returns final action result. 
Otherwise returns False which means that a result of method run() should be ignored and a real action result is supposed to be delivered in an asynchronous manner using public API. By default, if a concrete implementation doesn't override this method then the action is synchronous. """ return True mistral-6.0.0/mistral/actions/__init__.py0000666000175100017510000000000013245513261020432 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/actions/generator_factory.py0000666000175100017510000000303613245513261022424 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
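The run()/test()/is_sync() contract described in the Action docstrings above can be sketched with a small self-contained example. The class names here are illustrative only and are not part of the Mistral codebase:

```python
# Hypothetical example of the action contract described above: run() does
# the real work, test() returns a representative dry-run result, and
# is_sync() tells the engine whether run()'s return value is the result.
import abc


class Action(object):
    """Simplified stand-in for the deprecated mistral.actions.base.Action."""

    @abc.abstractmethod
    def run(self):
        pass

    @abc.abstractmethod
    def test(self):
        pass

    def is_sync(self):
        # Synchronous by default, matching the base class behavior.
        return True


class RepeatAction(Action):
    """Returns its configured output repeated a given number of times."""

    def __init__(self, output, count=1):
        # Initializer parameters correspond to action input parameters.
        self.output = output
        self.count = count

    def run(self):
        # Synchronous action: the return value is the action result.
        return [self.output] * self.count

    def test(self):
        # Dry-run result, representative of the real one.
        return [self.output]


action = RepeatAction('ping', count=2)
print(action.run())      # -> ['ping', 'ping']
print(action.is_sync())  # -> True
```

A real asynchronous action would instead override is_sync() to return False and deliver its result later through the public API, as the docstring above explains.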
from oslo_utils import importutils from mistral.actions.openstack.action_generator import base SUPPORTED_MODULES = [ 'Nova', 'Glance', 'Keystone', 'Heat', 'Neutron', 'Cinder', 'Trove', 'Ironic', 'Baremetal Introspection', 'Swift', 'Zaqar', 'Barbican', 'Mistral', 'Designate', 'Magnum', 'Murano', 'Tacker', 'Aodh', 'Gnocchi', 'Glare' ] def all_generators(): for mod_name in SUPPORTED_MODULES: prefix = mod_name.replace(' ', '') mod_namespace = mod_name.lower().replace(' ', '_') mod_cls_name = 'mistral.actions.openstack.actions.%sAction' % prefix mod_action_cls = importutils.import_class(mod_cls_name) generator_cls_name = '%sActionGenerator' % prefix yield type( generator_cls_name, (base.OpenStackActionGenerator,), { 'action_namespace': mod_namespace, 'base_action_class': mod_action_cls } ) mistral-6.0.0/mistral/actions/action_generator.py0000666000175100017510000000176213245513261022236 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import abc class ActionGenerator(object): """Action generator. Action generator uses some data to build Action classes dynamically. """ @abc.abstractmethod def create_actions(self, *args, **kwargs): """Constructs classes of needed action. return: list of actions dicts containing name, class, description and parameter info. """ pass mistral-6.0.0/mistral/actions/std_actions.py0000666000175100017510000003460213245513272021226 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. 
# Copyright 2014 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from email import header from email.mime import text import json import smtplib import time from oslo_log import log as logging import requests import six from mistral import exceptions as exc from mistral.utils import javascript from mistral.utils import ssh_utils from mistral_lib import actions LOG = logging.getLogger(__name__) class EchoAction(actions.Action): """Echo action. This action just returns a configured value as a result without doing anything else. The value of such action implementation is that it can be used in development (for testing), demonstration and designing of workflows themselves where echo action can play the role of temporary stub. """ def __init__(self, output): self.output = output def run(self, context): LOG.info('Running echo action [output=%s]', self.output) return self.output def test(self, context): return 'Echo' class NoOpAction(actions.Action): """No-operation action. This action does nothing. It can be mostly useful for testing and debugging purposes. """ def __init__(self): pass def run(self, context): LOG.info('Running no-op action') return None def test(self, context): return None class AsyncNoOpAction(NoOpAction): """Asynchronous no-operation action.""" def is_sync(self): return False class FailAction(actions.Action): """'Always fail' action. This action just always throws an instance of ActionException. 
This behavior is useful in a number of cases, especially if we need to test a scenario where some of workflow tasks fail. """ def __init__(self): pass def run(self, context): LOG.info('Running fail action.') raise exc.ActionException('Fail action expected exception.') def test(self, context): raise exc.ActionException('Fail action expected exception.') class HTTPAction(actions.Action): """Constructs an HTTP action. :param url: URL for the new HTTP request. :param method: (optional, 'GET' by default) method for the new HTTP request. :param params: (optional) Dictionary or bytes to be sent in the query string for the HTTP request. :param body: (optional) Dictionary, bytes, or file-like object to send in the body of the HTTP request. :param headers: (optional) Dictionary of HTTP Headers to send with the HTTP request. :param cookies: (optional) Dict or CookieJar object to send with the HTTP request. :param auth: (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth. :param timeout: (optional) Float describing the timeout of the request in seconds. :param allow_redirects: (optional) Boolean. Set to True if POST/PUT/DELETE redirect following is allowed. :param proxies: (optional) Dictionary mapping protocol to the URL of the proxy. :param verify: (optional) if ``True``, the SSL cert will be verified. A CA_BUNDLE path can also be provided. 
""" def __init__(self, url, method="GET", params=None, body=None, headers=None, cookies=None, auth=None, timeout=None, allow_redirects=None, proxies=None, verify=None): if auth and len(auth.split(':')) == 2: self.auth = (auth.split(':')[0], auth.split(':')[1]) else: self.auth = auth if isinstance(headers, dict): for key, val in headers.items(): if isinstance(val, (six.integer_types, float)): headers[key] = str(val) self.url = url self.method = method self.params = params self.body = json.dumps(body) if isinstance(body, dict) else body self.headers = headers self.cookies = cookies self.timeout = timeout self.allow_redirects = allow_redirects self.proxies = proxies self.verify = verify def run(self, context): LOG.info( "Running HTTP action " "[url=%s, method=%s, params=%s, body=%s, headers=%s," " cookies=%s, auth=%s, timeout=%s, allow_redirects=%s," " proxies=%s, verify=%s]", self.url, self.method, self.params, self.body, self.headers, self.cookies, self.auth, self.timeout, self.allow_redirects, self.proxies, self.verify ) try: resp = requests.request( self.method, self.url, params=self.params, data=self.body, headers=self.headers, cookies=self.cookies, auth=self.auth, timeout=self.timeout, allow_redirects=self.allow_redirects, proxies=self.proxies, verify=self.verify ) except Exception as e: raise exc.ActionException("Failed to send HTTP request: %s" % e) LOG.info( "HTTP action response:\n%s\n%s", resp.status_code, resp.content ) # TODO(akuznetsova): Need to refactor Mistral serialiser and # deserializer to have an ability to pass needed encoding and work # with it. Now it can process only default 'utf-8' encoding. # Appropriate bug #1676411 was created. # Represent important resp data as a dictionary. 
try: content = resp.json(encoding=resp.encoding) except Exception as e: LOG.debug("HTTP action response is not json.") content = resp.content if content and resp.encoding not in (None, 'utf-8'): content = content.decode(resp.encoding).encode('utf-8') _result = { 'content': content, 'status': resp.status_code, 'headers': dict(resp.headers.items()), 'url': resp.url, 'history': resp.history, 'encoding': resp.encoding, 'reason': resp.reason, 'cookies': dict(resp.cookies.items()), 'elapsed': resp.elapsed.total_seconds() } if resp.status_code not in range(200, 307): return actions.Result(error=_result) return _result def test(self, context): # TODO(rakhmerov): Implement. return None class MistralHTTPAction(HTTPAction): def run(self, context): self.headers = self.headers or {} exec_ctx = context.execution self.headers.update({ 'Mistral-Workflow-Name': exec_ctx.workflow_name, 'Mistral-Workflow-Execution-Id': exec_ctx.workflow_execution_id, 'Mistral-Task-Id': exec_ctx.task_id, 'Mistral-Action-Execution-Id': exec_ctx.action_execution_id, 'Mistral-Callback-URL': exec_ctx.callback_url, }) super(MistralHTTPAction, self).run(context) def is_sync(self): return False def test(self, context): return None class SendEmailAction(actions.Action): def __init__(self, from_addr, to_addrs, smtp_server, smtp_password=None, subject=None, body=None): # TODO(dzimine): validate parameters # Task invocation parameters. self.to = to_addrs self.subject = subject or "" self.body = body or "" # Action provider settings. 
self.smtp_server = smtp_server self.sender = from_addr self.password = smtp_password def run(self, context): LOG.info( "Sending email message " "[from=%s, to=%s, subject=%s, using smtp=%s, body=%s...]", self.sender, self.to, self.subject, self.smtp_server, self.body[:128] ) message = text.MIMEText(self.body, _charset='utf-8') message['Subject'] = header.Header(self.subject, 'utf-8') message['From'] = self.sender message['To'] = ', '.join(self.to) try: s = smtplib.SMTP(self.smtp_server) if self.password is not None: # Sequence to request TLS connection and log in (RFC-2487). s.ehlo() s.starttls() s.ehlo() s.login(self.sender, self.password) s.sendmail(from_addr=self.sender, to_addrs=self.to, msg=message.as_string()) except (smtplib.SMTPException, IOError) as e: raise exc.ActionException("Failed to send an email message: %s" % e) def test(self, context): # Just logging the operation since this action is not supposed # to return a result. LOG.info( "Sending email message " "[from=%s, to=%s, subject=%s, using smtp=%s, body=%s...]", self.sender, self.to, self.subject, self.smtp_server, self.body[:128] ) class SSHAction(actions.Action): """Runs Secure Shell (SSH) command on provided single or multiple hosts. It is allowed to provide either a single host or a list of hosts in action parameter 'host'. In case of a single host the action result will be a single value, otherwise a list of results provided in the same order as provided hosts. 
""" @property def _execute_cmd_method(self): return ssh_utils.execute_command def __init__(self, cmd, host, username, password=None, private_key_filename=None): self.cmd = cmd self.host = host self.username = username self.password = password self.private_key_filename = private_key_filename self.params = { 'cmd': self.cmd, 'host': self.host, 'username': self.username, 'password': self.password, 'private_key_filename': self.private_key_filename } def run(self, context): def raise_exc(parent_exc=None): message = ("Failed to execute ssh cmd " "'%s' on %s" % (self.cmd, self.host)) if parent_exc: message += "\nException: %s" % str(parent_exc) raise exc.ActionException(message) try: results = [] if not isinstance(self.host, list): self.host = [self.host] for host_name in self.host: self.params['host'] = host_name status_code, result = self._execute_cmd_method(**self.params) if status_code > 0: return raise_exc() else: results.append(result) if len(results) > 1: return results return result except Exception as e: return raise_exc(parent_exc=e) def test(self, context): # TODO(rakhmerov): Implement. return None class SSHProxiedAction(SSHAction): @property def _execute_cmd_method(self): return ssh_utils.execute_command_via_gateway def __init__(self, cmd, host, username, private_key_filename, gateway_host, gateway_username=None, password=None, proxy_command=None): super(SSHProxiedAction, self).__init__( cmd, host, username, password, private_key_filename ) self.gateway_host = gateway_host self.gateway_username = gateway_username self.params.update( { 'gateway_host': gateway_host, 'gateway_username': gateway_username, 'proxy_command': proxy_command } ) class JavaScriptAction(actions.Action): """Evaluates given JavaScript. """ def __init__(self, script, context=None): """Context here refers to a javasctript context Not the usual mistral context. 
That is passed during the run method """ self.script = script self.js_context = context def run(self, context): try: script = """function f() { %s } f() """ % self.script return javascript.evaluate(script, self.js_context) except Exception as e: raise exc.ActionException("JavaScriptAction failed: %s" % str(e)) def test(self, context): return self.script class SleepAction(actions.Action): """Sleep action. This action sleeps for given amount of seconds. It can be mostly useful for testing and debugging purposes. """ def __init__(self, seconds=1): try: self._seconds = int(seconds) self._seconds = 0 if self._seconds < 0 else self._seconds except ValueError: self._seconds = 0 def run(self, context): LOG.info('Running sleep action [seconds=%s]', self._seconds) time.sleep(self._seconds) return None def test(self, context): time.sleep(1) return None class TestDictAction(actions.Action): """Generates test dict.""" def __init__(self, size=0, key_prefix='', val=''): self.size = size self.key_prefix = key_prefix self.val = val def run(self, context): LOG.info( 'Running test_dict action [size=%s, key_prefix=%s, val=%s]', self.size, self.key_prefix, self.val ) res = {} for i in range(self.size): res['%s%s' % (self.key_prefix, i)] = self.val return res def test(self, context): return {} mistral-6.0.0/mistral/actions/action_factory.py0000666000175100017510000000170213245513261021711 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
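The generator factory earlier in this package builds generator classes at runtime with the three-argument form of type(), combining a base class with per-module attributes. A reduced, self-contained sketch of that pattern (with hypothetical names) looks like this:

```python
# Sketch of the dynamic class construction used by all_generators() above:
# a new class is produced at runtime by type(name, bases, attrs). The
# class and function names here are illustrative, not Mistral APIs.
class BaseActionGenerator(object):
    action_namespace = None

    @classmethod
    def describe(cls):
        return 'generator for %s actions' % cls.action_namespace


def make_generator(mod_name):
    # e.g. 'Baremetal Introspection' -> namespace 'baremetal_introspection'
    namespace = mod_name.lower().replace(' ', '_')
    cls_name = '%sActionGenerator' % mod_name.replace(' ', '')

    # type(name, bases, attrs) creates a brand-new class object.
    return type(
        cls_name,
        (BaseActionGenerator,),
        {'action_namespace': namespace}
    )


gen_cls = make_generator('Baremetal Introspection')
print(gen_cls.__name__)    # -> BaremetalIntrospectionActionGenerator
print(gen_cls.describe())  # -> generator for baremetal_introspection actions
```

construct_action_class() in the next module applies the same idea in reverse: it imports an existing action class by its dotted path and subclasses it with restored attributes.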
from oslo_utils import importutils def construct_action_class(action_class_str, attributes): # Rebuild action class and restore attributes. action_class = importutils.import_class(action_class_str) unique_action_class = type( action_class.__name__, (action_class,), attributes ) return unique_action_class mistral-6.0.0/mistral/rpc/0000775000175100017510000000000013245513604015456 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/rpc/clients.py0000666000175100017510000003113413245513272017476 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # Copyright 2017 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg from osprofiler import profiler import threading from mistral import context as auth_ctx from mistral.engine import base as eng from mistral.event_engine import base as evt_eng from mistral.executors import base as exe from mistral.rpc import base _ENGINE_CLIENT = None _ENGINE_CLIENT_LOCK = threading.Lock() _EXECUTOR_CLIENT = None _EXECUTOR_CLIENT_LOCK = threading.Lock() _EVENT_ENGINE_CLIENT = None _EVENT_ENGINE_CLIENT_LOCK = threading.Lock() def cleanup(): """Clean all the RPC clients. Intended to be used by tests to recreate all RPC related objects. Another usage is forking a child API process. In this case we must recreate all RPC objects so that they function properly. 
""" global _ENGINE_CLIENT global _EXECUTOR_CLIENT global _EVENT_ENGINE_CLIENT _ENGINE_CLIENT = None _EXECUTOR_CLIENT = None _EVENT_ENGINE_CLIENT = None def get_engine_client(): global _ENGINE_CLIENT global _EVENT_ENGINE_CLIENT_LOCK with _ENGINE_CLIENT_LOCK: if not _ENGINE_CLIENT: _ENGINE_CLIENT = EngineClient(cfg.CONF.engine) return _ENGINE_CLIENT def get_executor_client(): global _EXECUTOR_CLIENT global _EXECUTOR_CLIENT_LOCK with _EXECUTOR_CLIENT_LOCK: if not _EXECUTOR_CLIENT: _EXECUTOR_CLIENT = ExecutorClient(cfg.CONF.executor) return _EXECUTOR_CLIENT def get_event_engine_client(): global _EVENT_ENGINE_CLIENT global _EVENT_ENGINE_CLIENT_LOCK with _EVENT_ENGINE_CLIENT_LOCK: if not _EVENT_ENGINE_CLIENT: _EVENT_ENGINE_CLIENT = EventEngineClient(cfg.CONF.event_engine) return _EVENT_ENGINE_CLIENT class EngineClient(eng.Engine): """RPC Engine client.""" def __init__(self, rpc_conf_dict): """Constructs an RPC client for engine. :param rpc_conf_dict: Dict containing RPC configuration. """ self._client = base.get_rpc_client_driver()(rpc_conf_dict) @base.wrap_messaging_exception def start_workflow(self, wf_identifier, wf_namespace='', wf_ex_id=None, wf_input=None, description='', **params): """Starts workflow sending a request to engine over RPC. :param wf_identifier: Workflow identifier. :param wf_namespace Workflow namespace. :param wf_namespace: Workflow namespace. :param wf_input: Workflow input data as a dictionary. :param wf_ex_id: Workflow execution id. If passed, it will be set in the new execution object. :param description: Execution description. :param params: Additional workflow type specific parameters. :return: Workflow execution. 
""" return self._client.sync_call( auth_ctx.ctx(), 'start_workflow', wf_identifier=wf_identifier, wf_namespace=wf_namespace, wf_ex_id=wf_ex_id, wf_input=wf_input or {}, description=description, params=params ) @base.wrap_messaging_exception def start_action(self, action_name, action_input, description=None, **params): """Starts action sending a request to engine over RPC. :param action_name: Action name. :param action_input: Action input data as a dictionary. :param description: Execution description. :param params: Additional options for action running. :return: Action execution. """ return self._client.sync_call( auth_ctx.ctx(), 'start_action', action_name=action_name, action_input=action_input or {}, description=description, params=params ) @base.wrap_messaging_exception @profiler.trace('engine-client-on-action-complete', hide_args=True) def on_action_complete(self, action_ex_id, result, wf_action=False, async_=False): """Conveys action result to Mistral Engine. This method should be used by clients of Mistral Engine to update state of a action execution once action has executed. One of the clients of this method is Mistral REST API server that receives action result from the outside action handlers. Note: calling this method serves an event notifying Mistral that it possibly needs to move the workflow on, i.e. run other workflow tasks for which all dependencies are satisfied. :param action_ex_id: Action execution id. :param result: Action execution result. :param wf_action: If True it means that the given id points to a workflow execution rather than action execution. It happens when a nested workflow execution sends its result to a parent workflow. :param async_: If True, run action in asynchronous mode (w/o waiting for completion). :return: Action(or workflow if wf_action=True) execution object. 
""" call = self._client.async_call if async_ else self._client.sync_call return call( auth_ctx.ctx(), 'on_action_complete', action_ex_id=action_ex_id, result=result, wf_action=wf_action ) @base.wrap_messaging_exception @profiler.trace('engine-client-on-action-update', hide_args=True) def on_action_update(self, action_ex_id, state, wf_action=False, async_=False): """Conveys update of action state to Mistral Engine. This method should be used by clients of Mistral Engine to update state of a action execution once action has executed. Note: calling this method serves an event notifying Mistral that it may need to change the state of the parent task and workflow. Use on_action_complete if the action execution reached completion state. :param action_ex_id: Action execution id. :param action_ex_id: Updated state. :param wf_action: If True it means that the given id points to a workflow execution rather than action execution. It happens when a nested workflow execution sends its result to a parent workflow. :param async_: If True, run action in asynchronous mode (w/o waiting for completion). :return: Action(or workflow if wf_action=True) execution object. """ call = self._client.async_call if async_ else self._client.sync_call return call( auth_ctx.ctx(), 'on_action_update', action_ex_id=action_ex_id, state=state, wf_action=wf_action ) @base.wrap_messaging_exception def pause_workflow(self, wf_ex_id): """Stops the workflow with the given execution id. :param wf_ex_id: Workflow execution id. :return: Workflow execution. """ return self._client.sync_call( auth_ctx.ctx(), 'pause_workflow', wf_ex_id=wf_ex_id ) @base.wrap_messaging_exception def rerun_workflow(self, task_ex_id, reset=True, env=None): """Rerun the workflow. This method reruns workflow with the given execution id at the specific task execution id. :param task_ex_id: Task execution id. :param reset: If true, then reset task execution state and purge action execution for the task. 
:param env: Environment variables to update. :return: Workflow execution. """ return self._client.sync_call( auth_ctx.ctx(), 'rerun_workflow', task_ex_id=task_ex_id, reset=reset, env=env ) @base.wrap_messaging_exception def resume_workflow(self, wf_ex_id, env=None): """Resumes the workflow with the given execution id. :param wf_ex_id: Workflow execution id. :param env: Environment variables to update. :return: Workflow execution. """ return self._client.sync_call( auth_ctx.ctx(), 'resume_workflow', wf_ex_id=wf_ex_id, env=env ) @base.wrap_messaging_exception def stop_workflow(self, wf_ex_id, state, message=None): """Stops workflow execution with given status. Once stopped, the workflow is complete with SUCCESS or ERROR, and can not be resumed. :param wf_ex_id: Workflow execution id :param state: State assigned to the workflow: SUCCESS or ERROR :param message: Optional information string :return: Workflow execution, model.Execution """ return self._client.sync_call( auth_ctx.ctx(), 'stop_workflow', wf_ex_id=wf_ex_id, state=state, message=message ) @base.wrap_messaging_exception def rollback_workflow(self, wf_ex_id): """Rolls back the workflow with the given execution id. :param wf_ex_id: Workflow execution id. :return: Workflow execution. """ return self._client.sync_call( auth_ctx.ctx(), 'rollback_workflow', wf_ex_id=wf_ex_id ) class ExecutorClient(exe.Executor): """RPC Executor client.""" def __init__(self, rpc_conf_dict): """Constructs an RPC client for the Executor.""" self.topic = cfg.CONF.executor.topic self._client = base.get_rpc_client_driver()(rpc_conf_dict) @profiler.trace('executor-client-run-action') def run_action(self, action_ex_id, action_cls_str, action_cls_attrs, params, safe_rerun, execution_context, redelivered=False, target=None, async_=True, timeout=None): """Sends a request to run action to executor. :param action_ex_id: Action execution id. :param action_cls_str: Action class name. :param action_cls_attrs: Action class attributes. 
:param params: Action input parameters. :param safe_rerun: If true, action would be re-run if executor dies during execution. :param execution_context: A dict of values providing information about the current execution. :param redelivered: Tells if given action was run before on another executor. :param target: Target (group of action executors). :param async_: If True, run action in asynchronous mode (w/o waiting for completion). :param timeout: a period of time in seconds after which execution of action will be interrupted :return: Action result. """ rpc_kwargs = { 'action_ex_id': action_ex_id, 'action_cls_str': action_cls_str, 'action_cls_attrs': action_cls_attrs, 'params': params, 'safe_rerun': safe_rerun, 'execution_context': execution_context, 'timeout': timeout } rpc_client_method = (self._client.async_call if async_ else self._client.sync_call) return rpc_client_method(auth_ctx.ctx(), 'run_action', **rpc_kwargs) class EventEngineClient(evt_eng.EventEngine): """RPC EventEngine client.""" def __init__(self, rpc_conf_dict): """Constructs an RPC client for the EventEngine service.""" self._client = base.get_rpc_client_driver()(rpc_conf_dict) def create_event_trigger(self, trigger, events): return self._client.sync_call( auth_ctx.ctx(), 'create_event_trigger', trigger=trigger, events=events ) def delete_event_trigger(self, trigger, events): return self._client.sync_call( auth_ctx.ctx(), 'delete_event_trigger', trigger=trigger, events=events ) def update_event_trigger(self, trigger): return self._client.sync_call( auth_ctx.ctx(), 'update_event_trigger', trigger=trigger, ) mistral-6.0.0/mistral/rpc/oslo/0000775000175100017510000000000013245513604016432 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/rpc/oslo/oslo_server.py0000666000175100017510000000372313245513261021354 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import oslo_messaging as messaging from oslo_messaging.rpc import dispatcher from mistral import context as ctx from mistral.rpc import base as rpc class OsloRPCServer(rpc.RPCServer): def __init__(self, conf): super(OsloRPCServer, self).__init__(conf) self.topic = conf.topic self.server_id = conf.host self.queue = self.topic self.routing_key = self.topic self.channel = None self.connection = None self.endpoints = [] self.oslo_server = None def register_endpoint(self, endpoint): self.endpoints.append(endpoint) def run(self, executor='blocking'): target = messaging.Target( topic=self.topic, server=self.server_id ) # TODO(rakhmerov): rpc.get_transport() should be in oslo.messaging # related module. access_policy = dispatcher.DefaultRPCAccessPolicy self.oslo_server = messaging.get_rpc_server( rpc.get_transport(), target, self.endpoints, executor=executor, serializer=ctx.RpcContextSerializer(), access_policy=access_policy ) self.oslo_server.start() def stop(self, graceful=False): self.oslo_server.stop() if graceful: self.oslo_server.wait() def wait(self): self.oslo_server.wait() mistral-6.0.0/mistral/rpc/oslo/__init__.py0000666000175100017510000000000013245513261020532 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/rpc/oslo/oslo_client.py0000666000175100017510000000273613245513261021327 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import oslo_messaging as messaging from mistral import context as auth_ctx from mistral.rpc import base as rpc class OsloRPCClient(rpc.RPCClient): def __init__(self, conf): super(OsloRPCClient, self).__init__(conf) self.topic = conf.topic serializer = auth_ctx.RpcContextSerializer() self._client = messaging.RPCClient( rpc.get_transport(), messaging.Target(topic=self.topic), serializer=serializer ) def sync_call(self, ctx, method, target=None, **kwargs): return self._client.prepare(topic=self.topic, server=target).call( ctx, method, **kwargs ) def async_call(self, ctx, method, target=None, **kwargs): return self._client.prepare(topic=self.topic, server=target).cast( ctx, method, **kwargs ) mistral-6.0.0/mistral/rpc/base.py0000666000175100017510000001232613245513261016747 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # Copyright 2017 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
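OsloRPCClient above maps sync_call to prepare().call(), which blocks until the server returns a result, and async_call to prepare().cast(), which only enqueues the invocation. A toy, transport-free sketch of that call/cast distinction (hypothetical classes, not oslo.messaging itself):

```python
# Toy illustration of the call/cast split used by OsloRPCClient above:
# "call" blocks and returns the endpoint's result, "cast" just enqueues
# the invocation and returns nothing. No real transport is involved.
import queue
import threading


class ToyRPCServer(object):
    def __init__(self, endpoint):
        self.endpoint = endpoint
        self._queue = queue.Queue()
        worker = threading.Thread(target=self._worker, daemon=True)
        worker.start()

    def _worker(self):
        # Dispatch loop: look up the requested method on the endpoint.
        while True:
            method, kwargs, reply = self._queue.get()
            result = getattr(self.endpoint, method)(**kwargs)
            if reply is not None:       # only "call" waits for a reply
                reply.put(result)

    def call(self, method, **kwargs):
        # Synchronous: block until the result arrives.
        reply = queue.Queue()
        self._queue.put((method, kwargs, reply))
        return reply.get(timeout=5)

    def cast(self, method, **kwargs):
        # Asynchronous: enqueue the invocation and return immediately.
        self._queue.put((method, kwargs, None))


class Endpoint(object):
    def start_workflow(self, wf_identifier):
        return {'workflow': wf_identifier, 'state': 'RUNNING'}


server = ToyRPCServer(Endpoint())
print(server.call('start_workflow', wf_identifier='wf1'))
print(server.cast('start_workflow', wf_identifier='wf2'))  # -> None
```

In the real client, EngineClient chooses between the two at runtime (for example, on_action_complete picks async_call when async_=True), trading a returned result for lower latency.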
import abc from oslo_config import cfg from oslo_log import log as logging import oslo_messaging as messaging from oslo_messaging.rpc import client from stevedore import driver from mistral import exceptions as exc LOG = logging.getLogger(__name__) _IMPL_CLIENT = None _IMPL_SERVER = None _TRANSPORT = None def cleanup(): """Intended to be used by tests to recreate all RPC related objects.""" global _TRANSPORT _TRANSPORT = None # TODO(rakhmerov): This method seems misplaced. Now we have different kind # of transports (oslo, kombu) and this module should not have any oslo # specific things anymore. def get_transport(): global _TRANSPORT if not _TRANSPORT: _TRANSPORT = messaging.get_rpc_transport(cfg.CONF) return _TRANSPORT def get_rpc_server_driver(): rpc_impl = cfg.CONF.rpc_implementation global _IMPL_SERVER if not _IMPL_SERVER: _IMPL_SERVER = driver.DriverManager( 'mistral.rpc.backends', '%s_server' % rpc_impl ).driver return _IMPL_SERVER def get_rpc_client_driver(): rpc_impl = cfg.CONF.rpc_implementation global _IMPL_CLIENT if not _IMPL_CLIENT: _IMPL_CLIENT = driver.DriverManager( 'mistral.rpc.backends', '%s_client' % rpc_impl ).driver return _IMPL_CLIENT def _wrap_exception_and_reraise(exception): message = "%s: %s" % (exception.__class__.__name__, exception.args[0]) raise exc.MistralException(message) def wrap_messaging_exception(method): """This decorator unwrap remote error in one of MistralException. oslo.messaging has different behavior on raising exceptions when fake or rabbit transports are used. In case of rabbit transport it raises wrapped RemoteError which forwards directly to API. Wrapped RemoteError contains one of MistralException raised remotely on Engine and for correct exception interpretation we need to unwrap and raise given exception and manually send it to API layer. 
""" def decorator(*args, **kwargs): try: return method(*args, **kwargs) except exc.MistralException: raise except (client.RemoteError, exc.KombuException, Exception) as e: if hasattr(e, 'exc_type') and hasattr(exc, e.exc_type): exc_cls = getattr(exc, e.exc_type) raise exc_cls(e.value) _wrap_exception_and_reraise(e) return decorator class RPCClient(object): def __init__(self, conf): """Base class for RPCClient's drivers RPC Client is responsible for sending requests to RPC Server. All RPC client drivers have to inherit from this class. :param conf: Additional config provided by upper layer. """ self.conf = conf @abc.abstractmethod def sync_call(self, ctx, method, target=None, **kwargs): """Synchronous call of RPC method. Blocks the thread and wait for method result. """ raise NotImplementedError @abc.abstractmethod def async_call(self, ctx, method, target=None, **kwargs): """Asynchronous call of RPC method. Does not block the thread, just send invoking data to the RPC server and immediately returns nothing. """ raise NotImplementedError class RPCServer(object): def __init__(self, conf): """Base class for RPCServer's drivers RPC Server should listen for request coming from RPC Clients and respond to them respectively to the registered endpoints. All RPC server drivers have to inherit from this class. :param conf: Additional config provided by upper layer. """ self.conf = conf @abc.abstractmethod def register_endpoint(self, endpoint): """Registers a new RPC endpoint. :param endpoint: an object containing methods which will be used as RPC methods. """ raise NotImplementedError @abc.abstractmethod def run(self, executor='blocking'): """Runs the RPC server. :param executor: Executor used to process incoming requests. Different implementations may support different options. """ raise NotImplementedError def stop(self, graceful=False): """Stop the RPC server. :param graceful: True if this method call should wait till all internal threads are finished. 
:return: """ # No-op by default. pass def wait(self): """Wait till all internal threads are finished.""" # No-op by default. pass mistral-6.0.0/mistral/rpc/__init__.py0000666000175100017510000000000013245513261017556 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/rpc/kombu/0000775000175100017510000000000013245513604016573 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/rpc/kombu/kombu_server.py0000666000175100017510000002213713245513261021656 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import amqp import socket import threading import time import kombu from oslo_config import cfg from oslo_log import log as logging from stevedore import driver from mistral import context as auth_ctx from mistral import exceptions as exc from mistral.rpc import base as rpc_base from mistral.rpc.kombu import base as kombu_base from mistral.rpc.kombu import kombu_hosts LOG = logging.getLogger(__name__) CONF = cfg.CONF _pool_opts = [ cfg.IntOpt( 'executor_thread_pool_size', default=64, deprecated_name="rpc_thread_pool_size", help='Size of executor thread pool when' ' executor is threading or eventlet.' 
), ] class KombuRPCServer(rpc_base.RPCServer, kombu_base.Base): def __init__(self, conf): super(KombuRPCServer, self).__init__(conf) CONF.register_opts(_pool_opts) kombu_base.set_transport_options() self._register_mistral_serialization() self.topic = conf.topic self.server_id = conf.host self._hosts = kombu_hosts.KombuHosts(CONF) self._executor_threads = CONF.executor_thread_pool_size self.exchange = CONF.control_exchange # TODO(rakhmerov): We shouldn't rely on any properties related # to oslo.messaging. Only "transport_url" should matter. self.virtual_host = CONF.oslo_messaging_rabbit.rabbit_virtual_host self.durable_queue = CONF.oslo_messaging_rabbit.amqp_durable_queues self.auto_delete = CONF.oslo_messaging_rabbit.amqp_auto_delete self.routing_key = self.topic self.channel = None self.conn = None self._running = threading.Event() self._stopped = threading.Event() self.endpoints = [] self._worker = None self._thread = None # TODO(ddeja): Those 2 options should be gathered from config. 
self._sleep_time = 1 self._max_sleep_time = 10 @property def is_running(self): """Return whether server is running.""" return self._running.is_set() def run(self, executor='blocking'): if self._thread is None: self._thread = threading.Thread(target=self._run, args=(executor,)) self._thread.daemon = True self._thread.start() def _run(self, executor): """Start the server.""" self._prepare_worker(executor) while True: try: _retry_connection = False host = self._hosts.get_host() self.conn = self._make_connection( host.hostname, host.port, host.username, host.password, self.virtual_host, ) conn = kombu.connections[self.conn].acquire(block=True) exchange = self._make_exchange( self.exchange, durable=self.durable_queue, auto_delete=self.auto_delete ) queue = self._make_queue( self.topic, exchange, routing_key=self.routing_key, durable=self.durable_queue, auto_delete=self.auto_delete ) with conn.Consumer( queues=queue, callbacks=[self._process_message], ) as consumer: consumer.qos(prefetch_count=1) self._running.set() self._stopped.clear() LOG.info( "Connected to AMQP at %s:%s", host.hostname, host.port ) self._sleep_time = 1 while self.is_running: try: conn.drain_events(timeout=1) except socket.timeout: pass except KeyboardInterrupt: self.stop() LOG.info( "Server with id='%s' stopped.", self.server_id ) return except (socket.error, amqp.exceptions.ConnectionForced) as e: LOG.debug("Broker connection failed: %s", e) _retry_connection = True finally: self._stopped.set() if _retry_connection: LOG.debug( "Sleeping for %s seconds, then retrying " "connection", self._sleep_time ) time.sleep(self._sleep_time) self._sleep_time = min( self._sleep_time * 2, self._max_sleep_time ) def stop(self, graceful=False): self._running.clear() if graceful: self.wait() def wait(self): self._stopped.wait() try: self._worker.shutdown(wait=True) except AttributeError as e: LOG.warning("Cannot stop worker in graceful way: %s", e) def _get_rpc_method(self, method_name): for endpoint in
self.endpoints: if hasattr(endpoint, method_name): return getattr(endpoint, method_name) return None @staticmethod def _set_auth_ctx(ctx): if not isinstance(ctx, dict): return context = auth_ctx.MistralContext.from_dict(ctx) auth_ctx.set_ctx(context) return context def publish_message(self, body, reply_to, corr_id, res_type='response'): if res_type != 'error': body = self._serialize_message({'body': body}) with kombu.producers[self.conn].acquire(block=True) as producer: producer.publish( body=body, exchange=self.exchange, routing_key=reply_to, correlation_id=corr_id, serializer='pickle' if res_type == 'error' else 'json', type=res_type ) def _on_message_safe(self, request, message): try: return self._on_message(request, message) except Exception as e: LOG.warning( "Got exception while consuming message. The exception will be " "sent back to the caller." ) LOG.debug("Exception: %s", str(e)) # Wrap exception into another exception for compatibility # with oslo. self.publish_message( exc.KombuException(e), message.properties['reply_to'], message.properties['correlation_id'], res_type='error' ) finally: message.ack() def _on_message(self, request, message): LOG.debug('Received message %s', request) is_async = request.get('async', False) rpc_ctx = request.get('rpc_ctx') redelivered = message.delivery_info.get('redelivered') rpc_method_name = request.get('rpc_method') arguments = self._deserialize_message(request.get('arguments')) correlation_id = message.properties['correlation_id'] reply_to = message.properties['reply_to'] if redelivered is not None: rpc_ctx['redelivered'] = redelivered rpc_context = self._set_auth_ctx(rpc_ctx) rpc_method = self._get_rpc_method(rpc_method_name) if not rpc_method: raise exc.MistralException("No such method: %s" % rpc_method_name) response = rpc_method(rpc_ctx=rpc_context, **arguments) if not is_async: LOG.debug( "RPC server sent a reply [reply_to = %s, correlation_id = %s]", reply_to, correlation_id ) self.publish_message( response,
reply_to, correlation_id ) def register_endpoint(self, endpoint): self.endpoints.append(endpoint) def _process_message(self, request, message): self._worker.submit(self._on_message_safe, request, message) def _prepare_worker(self, executor='blocking'): mgr = driver.DriverManager('kombu_driver.executors', executor) executor_opts = {} if executor == 'threading': executor_opts['max_workers'] = self._executor_threads self._worker = mgr.driver(**executor_opts) mistral-6.0.0/mistral/rpc/kombu/examples/0000775000175100017510000000000013245513604020411 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/rpc/kombu/examples/client.py0000666000175100017510000000236013245513261022243 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import sys from mistral.rpc.kombu import kombu_client # Example of using Kombu based RPC client. def main(): conf = { 'user_id': 'guest', 'password': 'secret', 'exchange': 'my_exchange', 'topic': 'my_topic', 'server_id': 'host', 'host': 'localhost', 'port': 5672, 'virtual_host': '/' } kombu_rpc = kombu_client.KombuRPCClient(conf) print(" [x] Requesting ...") ctx = type('context', (object,), {'to_dict': lambda self: {}})() response = kombu_rpc.sync_call(ctx, 'fib', n=44) print(" [.] 
Got %r" % (response,)) if __name__ == '__main__': sys.exit(main()) mistral-6.0.0/mistral/rpc/kombu/examples/__init__.py0000666000175100017510000000000013245513261022511 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/rpc/kombu/examples/server.py0000666000175100017510000000302713245513261022274 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import sys from mistral.rpc.kombu import kombu_server # Simple example of endpoint of RPC server, which just # calculates given fibonacci number. class MyServer(object): cache = {0: 0, 1: 1} def fib(self, rpc_ctx, n): if self.cache.get(n) is None: self.cache[n] = (self.fib(rpc_ctx, n - 1) + self.fib(rpc_ctx, n - 2)) return self.cache[n] def get_name(self, rpc_ctx): return self.__class__.__name__ # Example of using Kombu based RPC server. def main(): conf = { 'user_id': 'guest', 'password': 'secret', 'exchange': 'my_exchange', 'topic': 'my_topic', 'server_id': 'host', 'host': 'localhost', 'port': 5672, 'virtual_host': '/' } rpc_server = kombu_server.KombuRPCServer(conf) rpc_server.register_endpoint(MyServer()) rpc_server.run() if __name__ == '__main__': sys.exit(main()) mistral-6.0.0/mistral/rpc/kombu/kombu_client.py0000666000175100017510000001551513245513261021630 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import socket import itertools import errno import six from six import moves import kombu from oslo_log import log as logging from mistral import config as cfg from mistral import exceptions as exc from mistral.rpc import base as rpc_base from mistral.rpc.kombu import base as kombu_base from mistral.rpc.kombu import kombu_hosts from mistral.rpc.kombu import kombu_listener from mistral import utils #: When connection to the RabbitMQ server breaks, the #: client will receive EPIPE socket errors. These indicate #: an error that may be fixed by retrying. This constant #: is a guess for how many times the retry may be reasonable EPIPE_RETRIES = 4 LOG = logging.getLogger(__name__) CONF = cfg.CONF CONF.import_opt('rpc_response_timeout', 'mistral.config') class KombuRPCClient(rpc_base.RPCClient, kombu_base.Base): def __init__(self, conf): super(KombuRPCClient, self).__init__(conf) kombu_base.set_transport_options() self._register_mistral_serialization() self.topic = conf.topic self.server_id = conf.host self._hosts = kombu_hosts.KombuHosts(CONF) self.exchange = CONF.control_exchange self.virtual_host = CONF.oslo_messaging_rabbit.rabbit_virtual_host self.durable_queue = CONF.oslo_messaging_rabbit.amqp_durable_queues self.auto_delete = CONF.oslo_messaging_rabbit.amqp_auto_delete self._timeout = CONF.rpc_response_timeout self.routing_key = self.topic hosts = self._hosts.get_hosts() connections = [] for host in hosts: conn = self._make_connection( host.hostname, host.port, host.username, host.password, self.virtual_host ) connections.append(conn) self._connections = 
itertools.cycle(connections) # Create exchange. exchange = self._make_exchange( self.exchange, durable=self.durable_queue, auto_delete=self.auto_delete ) # Create queue. self.queue_name = utils.generate_unicode_uuid() self.callback_queue = kombu.Queue( self.queue_name, exchange=exchange, routing_key=self.queue_name, durable=False, exclusive=True, auto_delete=True ) self._listener = kombu_listener.KombuRPCListener( connections=self._connections, callback_queue=self.callback_queue ) self._listener.start() def _wait_for_result(self, correlation_id): """Waits for the result from the server. Blocks until the result arrives or the configured timeout is reached. If the timeout expires before a result arrives, a `MistralException` is raised. """ try: return self._listener.get_result(correlation_id, self._timeout) except moves.queue.Empty: raise exc.MistralException( "RPC Request timeout, correlation_id = %s" % correlation_id ) def _call(self, ctx, method, target, async_=False, **kwargs): """Performs a remote call for the given method. :param ctx: authentication context associated with mistral :param method: name of the method that should be executed :param kwargs: keyword parameters for the remote-method :param target: Server name :param async_: bool value meaning whether the request is asynchronous or not. :return: result of the method or None if async_ is True. """ correlation_id = utils.generate_unicode_uuid() body = { 'rpc_ctx': ctx.to_dict(), 'rpc_method': method, 'arguments': self._serialize_message(kwargs), 'async': async_ } LOG.debug("Publish request: %s", body) try: if not async_: self._listener.add_listener(correlation_id) # Publish request. for retry_round in six.moves.range(EPIPE_RETRIES): if self._publish_request(body, correlation_id): break # Start waiting for response.
if async_: return LOG.debug( "Waiting a reply for sync call [reply_to = %s]", self.queue_name ) result = self._wait_for_result(correlation_id) res_type = result[kombu_base.TYPE] res_object = result[kombu_base.RESULT] if res_type == 'error': raise res_object else: res_object = self._deserialize_message(res_object)['body'] finally: if not async_: self._listener.remove_listener(correlation_id) return res_object def _publish_request(self, body, correlation_id): """Publishes the request message .. note:: The :const:`errno.EPIPE` socket errors are suppressed and result in False being returned. This is because this type of error can usually be fixed by retrying. :param body: message body :param correlation_id: correlation id :return: True if publish succeeded, False otherwise :rtype: bool """ try: conn = self._listener.wait_ready() if conn: with kombu.producers[conn].acquire(block=True) as producer: producer.publish( body=body, exchange=self.exchange, routing_key=self.topic, reply_to=self.queue_name, correlation_id=correlation_id, delivery_mode=2 ) return True except socket.error as e: if e.errno != errno.EPIPE: raise else: LOG.debug('Retrying publish due to broker connection failure') return False def sync_call(self, ctx, method, target=None, **kwargs): return self._call(ctx, method, async_=False, target=target, **kwargs) def async_call(self, ctx, method, target=None, **kwargs): return self._call(ctx, method, async_=True, target=target, **kwargs) mistral-6.0.0/mistral/rpc/kombu/base.py0000666000175100017510000001260013245513261020057 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import kombu import oslo_messaging as messaging from mistral import config as cfg from mistral import exceptions as exc from mistral import serialization as mistral_serialization from mistral.utils import rpc_utils IS_RECEIVED = 'kombu_rpc_is_received' RESULT = 'kombu_rpc_result' CORR_ID = 'kombu_rpc_correlation_id' TYPE = 'kombu_rpc_type' CONF = cfg.CONF def set_transport_options(check_backend=True): # We can be sure that all needed transport options are registered # only if we at least once called method get_transport(). Because # this is the method that registers them. messaging.get_transport(CONF) backend = rpc_utils.get_rpc_backend( messaging.TransportURL.parse(CONF, CONF.transport_url) ) if check_backend and backend not in ['rabbit', 'kombu']: raise exc.MistralException("Unsupported backend: %s" % backend) class Base(object): """Base class for Client and Server.""" def __init__(self): self.serializer = None @staticmethod def _make_connection(amqp_host, amqp_port, amqp_user, amqp_password, amqp_vhost): """Create connection. This method creates object representing the connection to RabbitMQ. :param amqp_host: Address of RabbitMQ server. :param amqp_user: Username for connecting to RabbitMQ. :param amqp_password: Password matching the given username. :param amqp_vhost: Virtual host to connect to. :param amqp_port: Port of RabbitMQ server. :return: New connection to RabbitMQ. 
""" return kombu.BrokerConnection( hostname=amqp_host, userid=amqp_user, password=amqp_password, virtual_host=amqp_vhost, port=amqp_port, transport_options={'confirm_publish': True} ) @staticmethod def _make_exchange(name, durable=False, auto_delete=True, exchange_type='topic'): """Make named exchange. This method creates object representing exchange on RabbitMQ. It would create a new exchange if exchange with given name don't exists. :param name: Name of the exchange. :param durable: If set to True, messages on this exchange would be store on disk - therefore can be retrieve after failure. :param auto_delete: If set to True, exchange would be automatically deleted when none is connected. :param exchange_type: Type of the exchange. Can be one of 'direct', 'topic', 'fanout', 'headers'. See Kombu docs for further details. :return: Kombu exchange object. """ return kombu.Exchange( name=name, type=exchange_type, durable=durable, auto_delete=auto_delete ) @staticmethod def _make_queue(name, exchange, routing_key='', durable=False, auto_delete=True, **kwargs): """Make named queue for a given exchange. This method creates object representing queue in RabbitMQ. It would create a new queue if queue with given name don't exists. :param name: Name of the queue :param exchange: Kombu Exchange object (can be created using _make_exchange). :param routing_key: Routing key for queue. It behaves differently depending on exchange type. See Kombu docs for further details. :param durable: If set to True, messages on this queue would be store on disk - therefore can be retrieve after failure. :param auto_delete: If set to True, queue would be automatically deleted when none is connected. :param kwargs: See kombu documentation for all parameters than may be may be passed to Queue. :return: Kombu Queue object. 
""" return kombu.Queue( name=name, routing_key=routing_key, exchange=exchange, durable=durable, auto_delete=auto_delete, **kwargs ) def _register_mistral_serialization(self): """Adds mistral serializer to available serializers in kombu.""" self.serializer = mistral_serialization.get_polymorphic_serializer() def _serialize_message(self, kwargs): result = {} for argname, arg in kwargs.items(): result[argname] = self.serializer.serialize(arg) return result def _deserialize_message(self, kwargs): result = {} for argname, arg in kwargs.items(): result[argname] = self.serializer.deserialize(arg) return result mistral-6.0.0/mistral/rpc/kombu/kombu_hosts.py0000666000175100017510000000333713245513261021511 0ustar zuulzuul00000000000000# Copyright (c) 2017 Intel Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import itertools import random import six import oslo_messaging as messaging class KombuHosts(object): def __init__(self, conf): self._conf = conf transport_url = messaging.TransportURL.parse( self._conf, self._conf.transport_url ) if transport_url.hosts: self._hosts = transport_url.hosts else: username = self._conf.oslo_messaging_rabbit.rabbit_userid password = self._conf.oslo_messaging_rabbit.rabbit_password self._hosts = [] for host in self._conf.oslo_messaging_rabbit.rabbit_hosts: hostname, port = host.split(':') self._hosts.append(messaging.TransportHost( hostname, port, username, password )) if len(self._hosts) > 1: random.shuffle(self._hosts) self._hosts_cycle = itertools.cycle(self._hosts) def get_host(self): return six.next(self._hosts_cycle) def get_hosts(self): return self._hosts mistral-6.0.0/mistral/rpc/kombu/__init__.py0000666000175100017510000000000013245513261020673 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/rpc/kombu/kombu_listener.py0000666000175100017510000000770013245513261022174 0ustar zuulzuul00000000000000# Copyright (c) 2016 Intel Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
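`KombuHosts` above shuffles the configured broker hosts once (when there is more than one) and then hands them out round-robin through `itertools.cycle`. The same pattern in a self-contained sketch, with plain tuples standing in for `messaging.TransportHost` (the `HostCycler` name is invented for illustration):

```python
import itertools
import random


class HostCycler(object):
    """Round-robin over a list of (hostname, port) pairs, shuffled once."""

    def __init__(self, hosts):
        self._hosts = list(hosts)

        if len(self._hosts) > 1:
            # Shuffle once so that multiple clients don't all try
            # the same broker first.
            random.shuffle(self._hosts)

        self._cycle = itertools.cycle(self._hosts)

    def get_host(self):
        # Each call returns the next host, wrapping around forever.
        return next(self._cycle)

    def get_hosts(self):
        return self._hosts


cycler = HostCycler([('rabbit1', 5672), ('rabbit2', 5672)])

first = cycler.get_host()
second = cycler.get_host()
third = cycler.get_host()   # wraps around to the first host again
```

Shuffling once at construction (instead of on every call) keeps each client's ordering stable while still spreading initial connections across brokers.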
import itertools from kombu.mixins import ConsumerMixin import six import threading from oslo_log import log as logging from mistral.rpc.kombu import base as kombu_base LOG = logging.getLogger(__name__) class KombuRPCListener(ConsumerMixin): def __init__(self, connections, callback_queue): self._results = {} self._connections = itertools.cycle(connections) self._callback_queue = callback_queue self._thread = None self.connection = six.next(self._connections) self.ready = threading.Event() def add_listener(self, correlation_id): self._results[correlation_id] = six.moves.queue.Queue() def remove_listener(self, correlation_id): if correlation_id in self._results: del self._results[correlation_id] def get_consumers(self, Consumer, channel): consumers = [Consumer( self._callback_queue, callbacks=[self.on_message], accept=['pickle', 'json'] )] self.ready.set() return consumers def start(self): if self._thread is None: self._thread = threading.Thread(target=self.run) self._thread.daemon = True self._thread.start() def on_message(self, response, message): """Callback on response. This method is automatically called when a response is incoming and decides if it is the message we are waiting for - the message with the result. 
:param response: the body of the amqp message already deserialized by kombu :param message: the plain amqp kombu.message with additional information """ LOG.debug("Got response: {0}".format(response)) try: message.ack() except Exception as e: LOG.exception("Failed to acknowledge AMQP message: %s", e) else: LOG.debug("AMQP message acknowledged.") correlation_id = message.properties['correlation_id'] queue = self._results.get(correlation_id) if queue: result = { kombu_base.TYPE: 'error' if message.properties.get('type') == 'error' else None, kombu_base.RESULT: response } queue.put(result) else: LOG.debug( "Got a response, but seems like no process is waiting for " "it [correlation_id={0}]".format(correlation_id) ) def get_result(self, correlation_id, timeout): return self._results[correlation_id].get(block=True, timeout=timeout) def on_connection_error(self, exc, interval): self.ready.clear() self.connection = six.next(self._connections) LOG.debug("Broker connection failed: %s", exc) LOG.debug( "Sleeping for %s seconds, then retrying connection", interval ) def wait_ready(self, timeout=10.0): """Waits for the listener to successfully declare the consumer. :param timeout: timeout for waiting in seconds :return: the active connection if the consumer became ready within the timeout, False otherwise :rtype: kombu.Connection or bool """ if self.ready.wait(timeout=timeout): return self.connection else: return False mistral-6.0.0/mistral/api/0000775000175100017510000000000013245513604015443 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/api/hooks/0000775000175100017510000000000013245513604016566 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/api/hooks/content_type.py0000666000175100017510000000264113245513261021657 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from pecan import hooks class ContentTypeHook(hooks.PecanHook): def __init__(self, content_type, methods=['GET']): """Content type hook. This hook is needed for changing content type of responses but only for some HTTP methods. This is kind of 'hack' but it seems impossible using pecan/WSME to set different content types on request and response. :param content_type: Content-Type of the response. :type content_type: str :param methods: HTTP methods that should have response with given content_type. :type methods: list """ self.content_type = content_type self.methods = methods def after(self, state): if state.request.method in self.methods: state.response.content_type = self.content_type mistral-6.0.0/mistral/api/hooks/__init__.py0000666000175100017510000000000013245513261020666 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/api/app.py0000666000175100017510000000547013245513261016604 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
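The `after` logic of `ContentTypeHook` is easy to exercise without pecan: only requests whose HTTP method is in the configured list get the overridden response content type. A pecan-free sketch, where the `State`/`Request`/`Response` stand-ins are invented for illustration and only mimic the attributes the hook touches:

```python
class Request(object):
    def __init__(self, method):
        self.method = method


class Response(object):
    def __init__(self):
        # Default content type before the hook runs.
        self.content_type = 'application/json'


class State(object):
    """Minimal stand-in for pecan's hook state object."""

    def __init__(self, method):
        self.request = Request(method)
        self.response = Response()


class ContentTypeHook(object):
    """Overrides the response content type, but only for selected methods."""

    def __init__(self, content_type, methods=('GET',)):
        self.content_type = content_type
        self.methods = methods

    def after(self, state):
        if state.request.method in self.methods:
            state.response.content_type = self.content_type


hook = ContentTypeHook('text/plain', methods=['GET'])

get_state = State('GET')
post_state = State('POST')

hook.after(get_state)    # GET is in the list: content type is overridden
hook.after(post_state)   # POST is not: content type stays unchanged
```

This is the "hack" the docstring mentions: the override happens after the controller runs, so the request-side content type negotiation is untouched.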
from oslo_config import cfg import oslo_middleware.cors as cors_middleware import osprofiler.web import pecan from mistral.api import access_control from mistral import config as m_config from mistral import context as ctx from mistral.db.v2 import api as db_api_v2 from mistral.rpc import base as rpc from mistral.service import coordination from mistral.services import periodic def get_pecan_config(): # Set up the pecan configuration. opts = cfg.CONF.pecan cfg_dict = { "app": { "root": opts.root, "modules": opts.modules, "debug": opts.debug, "auth_enable": opts.auth_enable } } return pecan.configuration.conf_from_dict(cfg_dict) def setup_app(config=None): if not config: config = get_pecan_config() m_config.set_config_defaults() app_conf = dict(config.app) db_api_v2.setup_db() # TODO(rakhmerov): Why do we run cron triggers in the API layer? # Should we move it to the engine? if cfg.CONF.cron_trigger.enabled: periodic.setup() coordination.Service('api_group').register_membership() app = pecan.make_app( app_conf.pop('root'), hooks=lambda: [ctx.AuthHook(), ctx.ContextHook()], logging=getattr(config, 'logging', {}), **app_conf ) # Set up access control. app = access_control.setup(app) # TODO(rakhmerov): need to get rid of this call. # Set up RPC related flags in config rpc.get_transport() # Set up profiler. if cfg.CONF.profiler.enabled: app = osprofiler.web.WsgiMiddleware( app, hmac_keys=cfg.CONF.profiler.hmac_keys, enabled=cfg.CONF.profiler.enabled ) # Create a CORS wrapper, and attach mistral-specific defaults that must be # included in all CORS responses. return cors_middleware.CORS(app, cfg.CONF) def init_wsgi(): # By default, oslo.config parses the CLI args if no args are provided. # As a result, invoking this wsgi script from gunicorn leads to the error # with argparse complaining that the CLI options have already been parsed.
m_config.parse_args(args=[]) return setup_app() mistral-6.0.0/mistral/api/wsgi.py0000666000175100017510000000123513245513261016770 0ustar zuulzuul00000000000000# Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from mistral.api import app application = app.init_wsgi() mistral-6.0.0/mistral/api/__init__.py0000666000175100017510000000000013245513261017543 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/api/controllers/0000775000175100017510000000000013245513604020011 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/api/controllers/v2/0000775000175100017510000000000013245513604020340 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/api/controllers/v2/validation.py0000666000175100017510000000222113245513261023042 0ustar zuulzuul00000000000000# Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
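`setup_app` above builds the WSGI pipeline by successive wrapping: the pecan app is wrapped by access control, optionally by the profiler, and finally by the CORS middleware, so requests pass through the layers outside-in. A generic, framework-free sketch of that layering with plain WSGI callables (the `cors_middleware` stand-in here only adds one header and is not oslo.middleware's implementation):

```python
def base_app(environ, start_response):
    # Innermost WSGI application (stands in for the pecan app).
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']


def cors_middleware(app, allowed_origin):
    """Simplified stand-in for the outermost CORS wrapper in setup_app()."""

    def wrapper(environ, start_response):
        def sr(status, headers):
            # Append the CORS header to whatever the inner app sends.
            return start_response(
                status,
                headers + [('Access-Control-Allow-Origin', allowed_origin)]
            )

        return app(environ, sr)

    return wrapper


app = cors_middleware(base_app, '*')

# Minimal invocation without a real server.
captured = {}


def start_response(status, headers):
    captured['status'] = status
    captured['headers'] = headers


body = app({}, start_response)
```

Each wrapper returns a new callable with the same `(environ, start_response)` signature, which is why the layers compose in any order and why `setup_app` can add them conditionally.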
import pecan from pecan import rest from mistral import exceptions as exc class SpecValidationController(rest.RestController): def __init__(self, parser): super(SpecValidationController, self).__init__() self._parse_func = parser @pecan.expose('json') def post(self): """Validate a spec.""" definition = pecan.request.text try: self._parse_func(definition) except exc.DSLParsingException as e: return {'valid': False, 'error': str(e)} return {'valid': True} mistral-6.0.0/mistral/api/controllers/v2/resources.py0000666000175100017510000004653313245513272022742 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
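`SpecValidationController.post` boils down to a small contract: run the parser over the request body and translate a parsing exception into `{'valid': False, 'error': ...}`, otherwise return `{'valid': True}`. A framework-free sketch of that contract — the `parse` function and its "version is required" rule are illustrative stand-ins, not the real Mistral DSL parser:

```python
class DSLParsingException(Exception):
    """Stand-in for mistral.exceptions.DSLParsingException."""


def parse(definition):
    # Toy parser: only checks that the definition mentions a version.
    if 'version' not in definition:
        raise DSLParsingException("'version' is required")


def validate_spec(parse_func, definition):
    # Same shape as SpecValidationController.post(): never raises,
    # always returns a validity dict.
    try:
        parse_func(definition)
    except DSLParsingException as e:
        return {'valid': False, 'error': str(e)}

    return {'valid': True}


ok = validate_spec(parse, "version: '2.0'")
bad = validate_spec(parse, "tasks: {}")
```

Catching the parser exception inside the handler is what lets the endpoint return a 200 with a structured verdict instead of surfacing a 500 to the client.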
import wsme from wsme import types as wtypes from mistral.api.controllers import resource from mistral.api.controllers.v2 import types from mistral.workflow import states SCOPE_TYPES = wtypes.Enum(str, 'private', 'public') class Workbook(resource.Resource): """Workbook resource.""" id = wtypes.text name = wtypes.text definition = wtypes.text "workbook definition in Mistral v2 DSL" tags = [wtypes.text] scope = SCOPE_TYPES "'private' or 'public'" project_id = wsme.wsattr(wtypes.text, readonly=True) created_at = wtypes.text updated_at = wtypes.text @classmethod def sample(cls): return cls(id='123e4567-e89b-12d3-a456-426655440000', name='book', definition='HERE GOES' 'WORKBOOK DEFINITION IN MISTRAL DSL v2', tags=['large', 'expensive'], scope='private', project_id='a7eb669e9819420ea4bd1453e672c0a7', created_at='1970-01-01T00:00:00.000000', updated_at='1970-01-01T00:00:00.000000') class Workbooks(resource.ResourceList): """A collection of Workbooks.""" workbooks = [Workbook] def __init__(self, **kwargs): self._type = 'workbooks' super(Workbooks, self).__init__(**kwargs) @classmethod def sample(cls): return cls(workbooks=[Workbook.sample()]) class Workflow(resource.Resource): """Workflow resource.""" id = wtypes.text name = wtypes.text namespace = wtypes.text input = wtypes.text definition = wtypes.text "Workflow definition in Mistral v2 DSL" tags = [wtypes.text] scope = SCOPE_TYPES "'private' or 'public'" project_id = wtypes.text created_at = wtypes.text updated_at = wtypes.text @classmethod def sample(cls): return cls(id='123e4567-e89b-12d3-a456-426655440000', name='flow', input='param1, param2', definition='HERE GOES' 'WORKFLOW DEFINITION IN MISTRAL DSL v2', tags=['large', 'expensive'], scope='private', project_id='a7eb669e9819420ea4bd1453e672c0a7', created_at='1970-01-01T00:00:00.000000', updated_at='1970-01-01T00:00:00.000000', namespace='') @classmethod def _set_input(cls, obj, wf_spec): input_list = [] if wf_spec: input = wf_spec.get('input', []) for param in 
input: if isinstance(param, dict): for k, v in param.items(): input_list.append("%s=%s" % (k, v)) else: input_list.append(param) setattr(obj, 'input', ", ".join(input_list) if input_list else '') return obj @classmethod def from_dict(cls, d): obj = super(Workflow, cls).from_dict(d) return cls._set_input(obj, d.get('spec')) @classmethod def from_db_model(cls, db_model): obj = super(Workflow, cls).from_db_model(db_model) return cls._set_input(obj, db_model.spec) class Workflows(resource.ResourceList): """A collection of workflows.""" workflows = [Workflow] def __init__(self, **kwargs): self._type = 'workflows' super(Workflows, self).__init__(**kwargs) @classmethod def sample(cls): workflows_sample = cls() workflows_sample.workflows = [Workflow.sample()] workflows_sample.next = ("http://localhost:8989/v2/workflows?" "sort_keys=id,name&" "sort_dirs=asc,desc&limit=10&" "marker=123e4567-e89b-12d3-a456-426655440000") return workflows_sample class Action(resource.Resource): """Action resource. NOTE: *name* is immutable. Note that name and description get inferred from action definition when Mistral service receives a POST request. So they can't be changed in another way. 
""" id = wtypes.text name = wtypes.text is_system = bool input = wtypes.text description = wtypes.text tags = [wtypes.text] definition = wtypes.text scope = SCOPE_TYPES project_id = wsme.wsattr(wtypes.text, readonly=True) created_at = wtypes.text updated_at = wtypes.text @classmethod def sample(cls): return cls( id='123e4567-e89b-12d3-a456-426655440000', name='flow', definition='HERE GOES ACTION DEFINITION IN MISTRAL DSL v2', tags=['large', 'expensive'], scope='private', project_id='a7eb669e9819420ea4bd1453e672c0a7', created_at='1970-01-01T00:00:00.000000', updated_at='1970-01-01T00:00:00.000000' ) class Actions(resource.ResourceList): """A collection of Actions.""" actions = [Action] def __init__(self, **kwargs): self._type = 'actions' super(Actions, self).__init__(**kwargs) @classmethod def sample(cls): sample = cls() sample.actions = [Action.sample()] sample.next = ( "http://localhost:8989/v2/actions?sort_keys=id,name&" "sort_dirs=asc,desc&limit=10&" "marker=123e4567-e89b-12d3-a456-426655440000" ) return sample class Execution(resource.Resource): """Execution resource.""" id = wtypes.text "execution ID. It is immutable and auto assigned or determined by the API " "client on execution creation. " "If it's passed to POST method from a client it'll be assigned to the " "newly created execution object, but only if an execution with such ID " "doesn't exist. If it exists, then the endpoint will just return " "execution properties in JSON." workflow_id = wtypes.text "workflow ID" workflow_name = wtypes.text "workflow name" workflow_namespace = wtypes.text """Workflow namespace. The workflow namespace is also saved under params and passed to all sub-workflow executions. When looking for the next sub-workflow to run, The correct workflow will be found by name and namespace, where the namespace can be the workflow namespace or the default namespace. 
Workflows in the same namespace as the top workflow will be given a higher priority.""" description = wtypes.text "description of workflow execution" params = types.jsontype """'params' define workflow type specific parameters. Specific parameters are: 'task_name' - the name of the target task. Only for reverse workflows. 'env' - A string value containing the name of the stored environment object or a dictionary with the environment variables used during workflow execution and accessible as 'env()' from within expressions (YAQL or Jinja) defined in the workflow text. 'evaluate_env' - If present, controls whether or not Mistral should recursively find and evaluate all expressions (YAQL or Jinja) within the specified environment (via 'env' parameter). 'True' - evaluate all expressions recursively in the environment structure. 'False' - don't evaluate expressions. 'True' by default. """ task_execution_id = wtypes.text "reference to the parent task execution" root_execution_id = wtypes.text "reference to the root execution" source_execution_id = wtypes.text """reference to a workflow execution id which will signal the api to perform a lookup of a current workflow_execution and create a replica based on that workflow inputs and parameters""" state = wtypes.text "state can be one of: IDLE, RUNNING, SUCCESS, ERROR, PAUSED" state_info = wtypes.text "an optional state information string" input = types.jsontype "input is a JSON structure containing workflow input values" output = types.jsontype "output is a workflow output" created_at = wtypes.text updated_at = wtypes.text project_id = wsme.wsattr(wtypes.text, readonly=True) @classmethod def sample(cls): return cls(id='123e4567-e89b-12d3-a456-426655440000', workflow_name='flow', workflow_namespace='some_namespace', workflow_id='123e4567-e89b-12d3-a456-426655441111', description='this is the first execution.', project_id='40a908dbddfe48ad80a87fb30fa70a03', state='SUCCESS', input={}, output={}, params={'env': {'k1': 'abc', 
'k2': 123}}, created_at='1970-01-01T00:00:00.000000', updated_at='1970-01-01T00:00:00.000000') class Executions(resource.ResourceList): """A collection of Execution resources.""" executions = [Execution] def __init__(self, **kwargs): self._type = 'executions' super(Executions, self).__init__(**kwargs) @classmethod def sample(cls): sample = cls() sample.executions = [Execution.sample()] sample.next = ( "http://localhost:8989/v2/executions?" "sort_keys=id,workflow_name&sort_dirs=asc,desc&limit=10&" "marker=123e4567-e89b-12d3-a456-426655440000" ) return sample class Task(resource.Resource): """Task resource.""" id = wtypes.text name = wtypes.text type = wtypes.text workflow_name = wtypes.text workflow_namespace = wtypes.text workflow_id = wtypes.text workflow_execution_id = wtypes.text state = wtypes.text """state can take one of the following values: IDLE, RUNNING, SUCCESS, ERROR, DELAYED""" state_info = wtypes.text "an optional state information string" project_id = wsme.wsattr(wtypes.text, readonly=True) runtime_context = types.jsontype result = wtypes.text published = types.jsontype processed = bool created_at = wtypes.text updated_at = wtypes.text # Add this param to make Mistral API work with WSME 0.8.0 or higher version reset = wsme.wsattr(bool, mandatory=True) env = types.jsontype @classmethod def sample(cls): return cls( id='123e4567-e89b-12d3-a456-426655440000', workflow_name='flow', workflow_id='123e4567-e89b-12d3-a456-426655441111', workflow_execution_id='123e4567-e89b-12d3-a456-426655440000', name='task', state=states.SUCCESS, project_id='40a908dbddfe48ad80a87fb30fa70a03', runtime_context={ 'triggered_by': [ { 'task_id': '123-123-123', 'event': 'on-success' } ] }, result='task result', published={'key': 'value'}, processed=True, created_at='1970-01-01T00:00:00.000000', updated_at='1970-01-01T00:00:00.000000', reset=True ) class Tasks(resource.ResourceList): """A collection of tasks.""" tasks = [Task] def __init__(self, **kwargs): self._type = 'tasks' 
super(Tasks, self).__init__(**kwargs) @classmethod def sample(cls): return cls(tasks=[Task.sample()]) class ActionExecution(resource.Resource): """ActionExecution resource.""" id = wtypes.text workflow_name = wtypes.text workflow_namespace = wtypes.text task_name = wtypes.text task_execution_id = wtypes.text state = wtypes.text state_info = wtypes.text tags = [wtypes.text] name = wtypes.text description = wtypes.text project_id = wsme.wsattr(wtypes.text, readonly=True) accepted = bool input = types.jsontype output = types.jsontype created_at = wtypes.text updated_at = wtypes.text params = types.jsontype # TODO(rakhmerov): What is this?? @classmethod def sample(cls): return cls( id='123e4567-e89b-12d3-a456-426655440000', workflow_name='flow', task_name='task1', workflow_execution_id='653e4127-e89b-12d3-a456-426655440076', task_execution_id='343e45623-e89b-12d3-a456-426655440090', state=states.SUCCESS, state_info=states.SUCCESS, tags=['foo', 'fee'], name='std.echo', description='My running action', project_id='40a908dbddfe48ad80a87fb30fa70a03', accepted=True, input={'first_name': 'John', 'last_name': 'Doe'}, output={'some_output': 'Hello, John Doe!'}, created_at='1970-01-01T00:00:00.000000', updated_at='1970-01-01T00:00:00.000000', params={'save_result': True, "run_sync": False} ) class ActionExecutions(resource.ResourceList): """A collection of action_executions.""" action_executions = [ActionExecution] def __init__(self, **kwargs): self._type = 'action_executions' super(ActionExecutions, self).__init__(**kwargs) @classmethod def sample(cls): return cls(action_executions=[ActionExecution.sample()]) class CronTrigger(resource.Resource): """CronTrigger resource.""" id = wtypes.text name = wtypes.text workflow_name = wtypes.text workflow_id = wtypes.text workflow_input = types.jsontype workflow_params = types.jsontype project_id = wsme.wsattr(wtypes.text, readonly=True) scope = SCOPE_TYPES pattern = wtypes.text remaining_executions = wtypes.IntegerType(minimum=1) 
first_execution_time = wtypes.text next_execution_time = wtypes.text created_at = wtypes.text updated_at = wtypes.text @classmethod def sample(cls): return cls( id='123e4567-e89b-12d3-a456-426655440000', name='my_trigger', workflow_name='my_wf', workflow_id='123e4567-e89b-12d3-a456-426655441111', workflow_input={}, workflow_params={}, project_id='40a908dbddfe48ad80a87fb30fa70a03', scope='private', pattern='* * * * *', remaining_executions=42, created_at='1970-01-01T00:00:00.000000', updated_at='1970-01-01T00:00:00.000000' ) class CronTriggers(resource.ResourceList): """A collection of cron triggers.""" cron_triggers = [CronTrigger] def __init__(self, **kwargs): self._type = 'cron_triggers' super(CronTriggers, self).__init__(**kwargs) @classmethod def sample(cls): return cls(cron_triggers=[CronTrigger.sample()]) class Environment(resource.Resource): """Environment resource.""" id = wtypes.text name = wtypes.text description = wtypes.text variables = types.jsontype scope = SCOPE_TYPES project_id = wsme.wsattr(wtypes.text, readonly=True) created_at = wtypes.text updated_at = wtypes.text @classmethod def sample(cls): return cls( id='123e4567-e89b-12d3-a456-426655440000', name='sample', description='example environment entry', variables={ 'server': 'localhost', 'database': 'temp', 'timeout': 600, 'verbose': True }, scope='private', project_id='40a908dbddfe48ad80a87fb30fa70a03', created_at='1970-01-01T00:00:00.000000', updated_at='1970-01-01T00:00:00.000000' ) class Environments(resource.ResourceList): """A collection of Environment resources.""" environments = [Environment] def __init__(self, **kwargs): self._type = 'environments' super(Environments, self).__init__(**kwargs) @classmethod def sample(cls): return cls(environments=[Environment.sample()]) class Member(resource.Resource): id = types.uuid resource_id = wtypes.text resource_type = wtypes.text project_id = wtypes.text member_id = wtypes.text status = wtypes.Enum(str, 'pending', 'accepted', 'rejected') 
created_at = wtypes.text updated_at = wtypes.text @classmethod def sample(cls): return cls( id='123e4567-e89b-12d3-a456-426655440000', resource_id='123e4567-e89b-12d3-a456-426655440011', resource_type='workflow', project_id='40a908dbddfe48ad80a87fb30fa70a03', member_id='a7eb669e9819420ea4bd1453e672c0a7', status='accepted', created_at='1970-01-01T00:00:00.000000', updated_at='1970-01-01T00:00:00.000000' ) class Members(resource.ResourceList): members = [Member] @classmethod def sample(cls): return cls(members=[Member.sample()]) class Service(resource.Resource): """Service resource.""" name = wtypes.text type = wtypes.text @classmethod def sample(cls): return cls(name='host1_1234', type='executor_group') class Services(resource.Resource): """A collection of Services.""" services = [Service] @classmethod def sample(cls): return cls(services=[Service.sample()]) class EventTrigger(resource.Resource): """EventTrigger resource.""" id = wsme.wsattr(wtypes.text, readonly=True) created_at = wsme.wsattr(wtypes.text, readonly=True) updated_at = wsme.wsattr(wtypes.text, readonly=True) project_id = wsme.wsattr(wtypes.text, readonly=True) name = wtypes.text workflow_id = types.uuid workflow_input = types.jsontype workflow_params = types.jsontype exchange = wtypes.text topic = wtypes.text event = wtypes.text scope = SCOPE_TYPES @classmethod def sample(cls): return cls(id='123e4567-e89b-12d3-a456-426655441414', created_at='1970-01-01T00:00:00.000000', updated_at='1970-01-01T00:00:00.000000', project_id='project', name='expiration_event_trigger', workflow_id='123e4567-e89b-12d3-a456-426655441414', workflow_input={}, workflow_params={}, exchange='nova', topic='notifications', event='compute.instance.create.end') class EventTriggers(resource.ResourceList): """A collection of event triggers.""" event_triggers = [EventTrigger] def __init__(self, **kwargs): self._type = 'event_triggers' super(EventTriggers, self).__init__(**kwargs) @classmethod def sample(cls): triggers_sample = cls() 
triggers_sample.event_triggers = [EventTrigger.sample()] triggers_sample.next = ("http://localhost:8989/v2/event_triggers?" "sort_keys=id,name&" "sort_dirs=asc,desc&limit=10&" "marker=123e4567-e89b-12d3-a456-426655440000") return triggers_sample mistral-6.0.0/mistral/api/controllers/v2/action.py0000666000175100017510000002110413245513261022166 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # Copyright 2015 Huawei Technologies Co., Ltd. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_log import log as logging import pecan from pecan import hooks from pecan import rest from wsme import types as wtypes import wsmeext.pecan as wsme_pecan from mistral.api import access_control as acl from mistral.api.controllers.v2 import resources from mistral.api.controllers.v2 import types from mistral.api.controllers.v2 import validation from mistral.api.hooks import content_type as ct_hook from mistral import context from mistral.db.v2 import api as db_api from mistral import exceptions as exc from mistral.lang import parser as spec_parser from mistral.services import actions from mistral.utils import filter_utils from mistral.utils import rest_utils LOG = logging.getLogger(__name__) class ActionsController(rest.RestController, hooks.HookController): # TODO(nmakhotkin): Have a discussion with pecan/WSME folks in order # to have requests and response of different content types. Then # delete ContentTypeHook. 
__hooks__ = [ct_hook.ContentTypeHook("application/json", ['POST', 'PUT'])] validate = validation.SpecValidationController( spec_parser.get_action_list_spec_from_yaml) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.Action, wtypes.text) def get(self, identifier): """Return the named action. :param identifier: ID or name of the Action to get. """ acl.enforce('actions:get', context.ctx()) LOG.debug("Fetch action [identifier=%s]", identifier) # Use retries to prevent possible failures. r = rest_utils.create_db_retry_object() db_model = r.call(db_api.get_action_definition, identifier) return resources.Action.from_db_model(db_model) @rest_utils.wrap_pecan_controller_exception @pecan.expose(content_type="text/plain") def put(self, identifier=None): """Update one or more actions. :param identifier: Optional. If provided, it's UUID or name of an action. Only one action can be updated with identifier param. NOTE: This text is allowed to have definitions of multiple actions. In this case they all will be updated. """ acl.enforce('actions:update', context.ctx()) definition = pecan.request.text LOG.debug("Update action(s) [definition=%s]", definition) scope = pecan.request.GET.get('scope', 'private') if scope not in resources.SCOPE_TYPES.values: raise exc.InvalidModelException( "Scope must be one of the following: %s; actual: " "%s" % (resources.SCOPE_TYPES.values, scope) ) with db_api.transaction(): db_acts = actions.update_actions( definition, scope=scope, identifier=identifier ) action_list = [ resources.Action.from_db_model(db_act) for db_act in db_acts ] return resources.Actions(actions=action_list).to_json() @rest_utils.wrap_pecan_controller_exception @pecan.expose(content_type="text/plain") def post(self): """Create a new action. NOTE: This text is allowed to have definitions of multiple actions. In this case they all will be created. 
""" acl.enforce('actions:create', context.ctx()) definition = pecan.request.text scope = pecan.request.GET.get('scope', 'private') pecan.response.status = 201 if scope not in resources.SCOPE_TYPES.values: raise exc.InvalidModelException( "Scope must be one of the following: %s; actual: " "%s" % (resources.SCOPE_TYPES.values, scope) ) LOG.debug("Create action(s) [definition=%s]", definition) with db_api.transaction(): db_acts = actions.create_actions(definition, scope=scope) action_list = [ resources.Action.from_db_model(db_act) for db_act in db_acts ] return resources.Actions(actions=action_list).to_json() @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(None, wtypes.text, status_code=204) def delete(self, identifier): """Delete the named action. :param identifier: Name or UUID of the action to delete. """ acl.enforce('actions:delete', context.ctx()) LOG.debug("Delete action [identifier=%s]", identifier) with db_api.transaction(): db_model = db_api.get_action_definition(identifier) if db_model.is_system: msg = "Attempt to delete a system action: %s" % identifier raise exc.DataAccessException(msg) db_api.delete_action_definition(identifier) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.Actions, types.uuid, int, types.uniquelist, types.list, types.uniquelist, wtypes.text, wtypes.text, resources.SCOPE_TYPES, wtypes.text, wtypes.text, wtypes.text, wtypes.text, wtypes.text, wtypes.text) def get_all(self, marker=None, limit=None, sort_keys='name', sort_dirs='asc', fields='', created_at=None, name=None, scope=None, tags=None, updated_at=None, description=None, definition=None, is_system=None, input=None): """Return all actions. :param marker: Optional. Pagination marker for large data sets. :param limit: Optional. Maximum number of resources to return in a single result. Default value is None for backward compatibility. :param sort_keys: Optional. Columns to sort results by. Default: name. :param sort_dirs: Optional. 
Directions to sort corresponding to sort_keys, "asc" or "desc" can be chosen. Default: asc. :param fields: Optional. A specified list of fields of the resource to be returned. 'id' will be included automatically in fields if it's provided, since it will be used when constructing 'next' link. :param name: Optional. Keep only resources with a specific name. :param scope: Optional. Keep only resources with a specific scope. :param definition: Optional. Keep only resources with a specific definition. :param is_system: Optional. Keep only system actions or ad-hoc actions (if False). :param input: Optional. Keep only resources with a specific input. :param description: Optional. Keep only resources with a specific description. :param tags: Optional. Keep only resources containing specific tags. :param created_at: Optional. Keep only resources created at a specific time and date. :param updated_at: Optional. Keep only resources with specific latest update time and date. """ acl.enforce('actions:list', context.ctx()) filters = filter_utils.create_filters_from_request_params( created_at=created_at, name=name, scope=scope, tags=tags, updated_at=updated_at, description=description, definition=definition, is_system=is_system, input=input ) LOG.debug("Fetch actions. marker=%s, limit=%s, sort_keys=%s, " "sort_dirs=%s, filters=%s", marker, limit, sort_keys, sort_dirs, filters) return rest_utils.get_all( resources.Actions, resources.Action, db_api.get_action_definitions, db_api.get_action_definition_by_id, marker=marker, limit=limit, sort_keys=sort_keys, sort_dirs=sort_dirs, fields=fields, **filters ) mistral-6.0.0/mistral/api/controllers/v2/execution.py0000666000175100017510000003420413245513272022723 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # Copyright 2015 Huawei Technologies Co., Ltd. # Copyright 2016 - Brocade Communications Systems, Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_log import log as logging from pecan import rest import sqlalchemy as sa import tenacity from wsme import types as wtypes import wsmeext.pecan as wsme_pecan from mistral.api import access_control as acl from mistral.api.controllers.v2 import resources from mistral.api.controllers.v2 import task from mistral.api.controllers.v2 import types from mistral import context from mistral.db.v2 import api as db_api from mistral import exceptions as exc from mistral.rpc import clients as rpc from mistral.services import workflows as wf_service from mistral.utils import filter_utils from mistral.utils import merge_dicts from mistral.utils import rest_utils from mistral.workflow import states LOG = logging.getLogger(__name__) STATE_TYPES = wtypes.Enum( str, states.IDLE, states.RUNNING, states.SUCCESS, states.ERROR, states.PAUSED, states.CANCELLED ) def _load_deferred_output_field(ex): if ex: # We need to refer to this lazy-load field explicitly in # order to make sure that it is correctly loaded. hasattr(ex, 'output') return ex def _get_workflow_execution_resource(wf_ex): _load_deferred_output_field(wf_ex) return resources.Execution.from_db_model(wf_ex) # Use retries to prevent possible failures. 
@tenacity.retry( retry=tenacity.retry_if_exception_type(sa.exc.OperationalError), stop=tenacity.stop_after_attempt(10), wait=tenacity.wait_incrementing(increment=100) # 0.1 seconds ) def _get_workflow_execution(id, must_exist=True): with db_api.transaction(): if must_exist: wf_ex = db_api.get_workflow_execution(id) else: wf_ex = db_api.load_workflow_execution(id) return _load_deferred_output_field(wf_ex) # TODO(rakhmerov): Make sure to make all needed renaming on public API. class ExecutionsController(rest.RestController): tasks = task.ExecutionTasksController() @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.Execution, wtypes.text) def get(self, id): """Return the specified Execution. :param id: UUID of execution to retrieve. """ acl.enforce("executions:get", context.ctx()) LOG.debug("Fetch execution [id=%s]", id) wf_ex = _get_workflow_execution(id) return resources.Execution.from_db_model(wf_ex) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose( resources.Execution, wtypes.text, body=resources.Execution ) def put(self, id, wf_ex): """Update the specified workflow execution. :param id: UUID of execution to update. :param wf_ex: Execution object. """ acl.enforce('executions:update', context.ctx()) LOG.debug('Update execution [id=%s, execution=%s]', id, wf_ex) with db_api.transaction(): # ensure that workflow execution exists db_api.get_workflow_execution(id) delta = {} if wf_ex.state: delta['state'] = wf_ex.state if wf_ex.description: delta['description'] = wf_ex.description if wf_ex.params and wf_ex.params.get('env'): delta['env'] = wf_ex.params.get('env') # Currently we can change only state, description, or env. if len(delta.values()) <= 0: raise exc.InputException( 'The property state, description, or env ' 'is not provided for update.' ) # Description cannot be updated together with state. 
if delta.get('description') and delta.get('state'): raise exc.InputException( 'The property description must be updated ' 'separately from state.' ) # If state change, environment cannot be updated if not RUNNING. if (delta.get('env') and delta.get('state') and delta['state'] != states.RUNNING): raise exc.InputException( 'The property env can only be updated when workflow ' 'execution is not running or on resume from pause.' ) if delta.get('description'): wf_ex = db_api.update_workflow_execution( id, {'description': delta['description']} ) if not delta.get('state') and delta.get('env'): wf_ex = db_api.get_workflow_execution(id) wf_ex = wf_service.update_workflow_execution_env( wf_ex, delta.get('env') ) if delta.get('state'): if states.is_paused(delta.get('state')): wf_ex = rpc.get_engine_client().pause_workflow(id) elif delta.get('state') == states.RUNNING: wf_ex = rpc.get_engine_client().resume_workflow( id, env=delta.get('env') ) elif states.is_completed(delta.get('state')): msg = wf_ex.state_info if wf_ex.state_info else None wf_ex = rpc.get_engine_client().stop_workflow( id, delta.get('state'), msg ) else: # To prevent changing state in other cases throw a message. raise exc.InputException( "Cannot change state to %s. Allowed states are: '%s" % ( wf_ex.state, ', '.join([ states.RUNNING, states.PAUSED, states.SUCCESS, states.ERROR, states.CANCELLED ]) ) ) return resources.Execution.from_dict( wf_ex if isinstance(wf_ex, dict) else wf_ex.to_dict() ) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose( resources.Execution, body=resources.Execution, status_code=201 ) def post(self, wf_ex): """Create a new Execution. :param wf_ex: Execution object with input content. 
""" acl.enforce('executions:create', context.ctx()) LOG.debug("Create execution [execution=%s]", wf_ex) exec_dict = wf_ex.to_dict() exec_id = exec_dict.get('id') source_execution_id = exec_dict.get('source_execution_id') source_exec_dict = None if exec_id: # If ID is present we need to check if such execution exists. # If yes, the method just returns the object. If not, the ID # will be used to create a new execution. wf_ex = _get_workflow_execution(exec_id, must_exist=False) if wf_ex: return resources.Execution.from_db_model(wf_ex) if source_execution_id: # If source execution is present we will perform a lookup for # previous workflow execution model and the information to start # a new workflow based on that information. source_exec_dict = db_api.get_workflow_execution( source_execution_id).to_dict() result_exec_dict = merge_dicts(source_exec_dict, exec_dict) if not (result_exec_dict.get('workflow_id') or result_exec_dict.get('workflow_name')): raise exc.WorkflowException( "Workflow ID or workflow name must be provided. Workflow ID is" " recommended." ) engine = rpc.get_engine_client() result = engine.start_workflow( result_exec_dict.get('workflow_id', result_exec_dict.get('workflow_name')), result_exec_dict.get('workflow_namespace', ''), exec_id, result_exec_dict.get('input'), result_exec_dict.get('description', ''), **result_exec_dict.get('params') or {} ) return resources.Execution.from_dict(result) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(None, wtypes.text, status_code=204) def delete(self, id): """Delete the specified Execution. :param id: UUID of execution to delete. 
""" acl.enforce('executions:delete', context.ctx()) LOG.debug("Delete execution [id=%s]", id) return db_api.delete_workflow_execution(id) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.Executions, types.uuid, int, types.uniquelist, types.list, types.uniquelist, wtypes.text, types.uuid, wtypes.text, types.jsontype, types.uuid, types.uuid, STATE_TYPES, wtypes.text, types.jsontype, types.jsontype, wtypes.text, wtypes.text, bool, types.uuid, bool) def get_all(self, marker=None, limit=None, sort_keys='created_at', sort_dirs='asc', fields='', workflow_name=None, workflow_id=None, description=None, params=None, task_execution_id=None, root_execution_id=None, state=None, state_info=None, input=None, output=None, created_at=None, updated_at=None, include_output=None, project_id=None, all_projects=False): """Return all Executions. :param marker: Optional. Pagination marker for large data sets. :param limit: Optional. Maximum number of resources to return in a single result. Default value is None for backward compatibility. :param sort_keys: Optional. Columns to sort results by. Default: created_at, which is backward compatible. :param sort_dirs: Optional. Directions to sort corresponding to sort_keys, "asc" or "desc" can be chosen. Default: desc. The length of sort_dirs can be equal or less than that of sort_keys. :param fields: Optional. A specified list of fields of the resource to be returned. 'id' will be included automatically in fields if it's provided, since it will be used when constructing 'next' link. :param workflow_name: Optional. Keep only resources with a specific workflow name. :param workflow_id: Optional. Keep only resources with a specific workflow ID. :param description: Optional. Keep only resources with a specific description. :param params: Optional. Keep only resources with specific parameters. :param task_execution_id: Optional. Keep only resources with a specific task execution ID. :param root_execution_id: Optional. 
Keep only resources with a specific root execution ID. :param state: Optional. Keep only resources with a specific state. :param state_info: Optional. Keep only resources with specific state information. :param input: Optional. Keep only resources with a specific input. :param output: Optional. Keep only resources with a specific output. :param created_at: Optional. Keep only resources created at a specific time and date. :param updated_at: Optional. Keep only resources with specific latest update time and date. :param include_output: Optional. Include the output for all executions in the list. :param project_id: Optional. Only get executions belonging to the project. Admin required. :param all_projects: Optional. Get resources of all projects. Admin required. """ acl.enforce('executions:list', context.ctx()) if all_projects or project_id: acl.enforce('executions:list:all_projects', context.ctx()) filters = filter_utils.create_filters_from_request_params( created_at=created_at, workflow_name=workflow_name, workflow_id=workflow_id, params=params, task_execution_id=task_execution_id, state=state, state_info=state_info, input=input, output=output, updated_at=updated_at, description=description, project_id=project_id, root_execution_id=root_execution_id, ) LOG.debug( "Fetch executions.
marker=%s, limit=%s, sort_keys=%s, " "sort_dirs=%s, filters=%s, all_projects=%s", marker, limit, sort_keys, sort_dirs, filters, all_projects ) if include_output: resource_function = _get_workflow_execution_resource else: resource_function = None return rest_utils.get_all( resources.Executions, resources.Execution, db_api.get_workflow_executions, db_api.get_workflow_execution, resource_function=resource_function, marker=marker, limit=limit, sort_keys=sort_keys, sort_dirs=sort_dirs, fields=fields, all_projects=all_projects, **filters ) mistral-6.0.0/mistral/api/controllers/v2/__init__.py0000666000175100017510000000000013245513261022440 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/api/controllers/v2/environment.py0000666000175100017510000001614013245513261023261 0ustar zuulzuul00000000000000# Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
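The executions `get_all` controller above translates query-string parameters (marker, limit, sort keys/directions, filters) into a paginated DB query. As a client-side illustration, here is a sketch of how the matching request URL for `GET /v2/executions` could be assembled. The helper name and endpoint path are ours for illustration; the defaults mirror the controller signature:

```python
from urllib.parse import urlencode


def build_executions_query(marker=None, limit=None, sort_keys='created_at',
                           sort_dirs='asc', **filters):
    """Build the query string for GET /v2/executions.

    Defaults mirror the controller's signature; None-valued parameters
    are omitted, matching how absent optional WSME arguments behave.
    """
    params = {'sort_keys': sort_keys, 'sort_dirs': sort_dirs}
    if marker:
        params['marker'] = marker
    if limit:
        params['limit'] = limit
    # Filter params such as state or workflow_name are passed through as-is.
    params.update({k: v for k, v in filters.items() if v is not None})
    return '/v2/executions?' + urlencode(sorted(params.items()))
```

A call like `build_executions_query(state='ERROR', limit=10)` yields a URL with only the parameters actually supplied, which is exactly the shape the controller's optional arguments expect.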
import json from oslo_log import log as logging from pecan import rest from wsme import types as wtypes import wsmeext.pecan as wsme_pecan from mistral.api import access_control as acl from mistral.api.controllers.v2 import resources from mistral.api.controllers.v2 import types from mistral import context from mistral.db.v2 import api as db_api from mistral import exceptions from mistral.utils import cut from mistral.utils import filter_utils from mistral.utils import rest_utils LOG = logging.getLogger(__name__) class EnvironmentController(rest.RestController): @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.Environments, types.uuid, int, types.uniquelist, types.list, types.uniquelist, wtypes.text, wtypes.text, types.jsontype, resources.SCOPE_TYPES, wtypes.text, wtypes.text) def get_all(self, marker=None, limit=None, sort_keys='created_at', sort_dirs='asc', fields='', name=None, description=None, variables=None, scope=None, created_at=None, updated_at=None): """Return all environments. Where project_id is the same as the requester or project_id is different but the scope is public. :param marker: Optional. Pagination marker for large data sets. :param limit: Optional. Maximum number of resources to return in a single result. Default value is None for backward compatibility. :param sort_keys: Optional. Columns to sort results by. Default: created_at, which is backward compatible. :param sort_dirs: Optional. Directions to sort corresponding to sort_keys, "asc" or "desc" can be chosen. Default: desc. The length of sort_dirs can be equal or less than that of sort_keys. :param fields: Optional. A specified list of fields of the resource to be returned. 'id' will be included automatically in fields if it's provided, since it will be used when constructing 'next' link. :param name: Optional. Keep only resources with a specific name. :param description: Optional. Keep only resources with a specific description. :param variables: Optional. 
Keep only resources with specific variables. :param scope: Optional. Keep only resources with a specific scope. :param created_at: Optional. Keep only resources created at a specific time and date. :param updated_at: Optional. Keep only resources with specific latest update time and date. """ acl.enforce('environments:list', context.ctx()) filters = filter_utils.create_filters_from_request_params( created_at=created_at, name=name, updated_at=updated_at, description=description, variables=variables, scope=scope ) LOG.debug("Fetch environments. marker=%s, limit=%s, sort_keys=%s, " "sort_dirs=%s, filters=%s", marker, limit, sort_keys, sort_dirs, filters) return rest_utils.get_all( resources.Environments, resources.Environment, db_api.get_environments, db_api.get_environment, marker=marker, limit=limit, sort_keys=sort_keys, sort_dirs=sort_dirs, fields=fields, **filters ) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.Environment, wtypes.text) def get(self, name): """Return the named environment. :param name: Name of environment to retrieve """ acl.enforce('environments:get', context.ctx()) LOG.debug("Fetch environment [name=%s]", name) # Use retries to prevent possible failures. r = rest_utils.create_db_retry_object() db_model = r.call(db_api.get_environment, name) return resources.Environment.from_db_model(db_model) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose( resources.Environment, body=resources.Environment, status_code=201 ) def post(self, env): """Create a new environment. :param env: Required. 
Environment structure to create """ acl.enforce('environments:create', context.ctx()) LOG.debug("Create environment [env=%s]", cut(env)) self._validate_environment( json.loads(wsme_pecan.pecan.request.body.decode()), ['name', 'description', 'variables'] ) db_model = db_api.create_environment(env.to_dict()) return resources.Environment.from_db_model(db_model) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.Environment, body=resources.Environment) def put(self, env): """Update an environment. :param env: Required. Environment structure to update """ acl.enforce('environments:update', context.ctx()) if not env.name: raise exceptions.InputException( 'Name of the environment is not provided.' ) LOG.debug("Update environment [name=%s, env=%s]", env.name, cut(env)) definition = json.loads(wsme_pecan.pecan.request.body.decode()) definition.pop('name') self._validate_environment( definition, ['description', 'variables', 'scope'] ) db_model = db_api.update_environment(env.name, env.to_dict()) return resources.Environment.from_db_model(db_model) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(None, wtypes.text, status_code=204) def delete(self, name): """Delete the named environment. :param name: Name of environment to delete """ acl.enforce('environments:delete', context.ctx()) LOG.debug("Delete environment [name=%s]", name) db_api.delete_environment(name) @staticmethod def _validate_environment(env_dict, legal_keys): if env_dict is None: return if set(env_dict) - set(legal_keys): raise exceptions.InputException( "Please, check your environment definition. Only: " "%s are allowed as definition keys." % legal_keys ) mistral-6.0.0/mistral/api/controllers/v2/cron_trigger.py0000666000175100017510000001763113245513261023407 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_log import log as logging from pecan import rest from wsme import types as wtypes import wsmeext.pecan as wsme_pecan from mistral.api import access_control as acl from mistral.api.controllers.v2 import resources from mistral.api.controllers.v2 import types from mistral import context from mistral.db.v2 import api as db_api from mistral.services import triggers from mistral.utils import filter_utils from mistral.utils import rest_utils LOG = logging.getLogger(__name__) class CronTriggersController(rest.RestController): @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.CronTrigger, wtypes.text) def get(self, identifier): """Returns the named cron_trigger. :param identifier: Id or name of cron trigger to retrieve """ acl.enforce('cron_triggers:get', context.ctx()) LOG.debug('Fetch cron trigger [identifier=%s]', identifier) # Use retries to prevent possible failures. r = rest_utils.create_db_retry_object() db_model = r.call(db_api.get_cron_trigger, identifier) return resources.CronTrigger.from_db_model(db_model) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose( resources.CronTrigger, body=resources.CronTrigger, status_code=201 ) def post(self, cron_trigger): """Creates a new cron trigger. :param cron_trigger: Required. Cron trigger structure. 
""" acl.enforce('cron_triggers:create', context.ctx()) LOG.debug('Create cron trigger: %s', cron_trigger) values = cron_trigger.to_dict() db_model = triggers.create_cron_trigger( values['name'], values.get('workflow_name'), values.get('workflow_input'), values.get('workflow_params'), values.get('pattern'), values.get('first_execution_time'), values.get('remaining_executions'), workflow_id=values.get('workflow_id') ) return resources.CronTrigger.from_db_model(db_model) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(None, wtypes.text, status_code=204) def delete(self, identifier): """Delete cron trigger. :param identifier: Id or name of cron trigger to delete """ acl.enforce('cron_triggers:delete', context.ctx()) LOG.debug("Delete cron trigger [identifier=%s]", identifier) triggers.delete_cron_trigger(identifier) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.CronTriggers, types.uuid, int, types.uniquelist, types.list, types.uniquelist, wtypes.text, wtypes.text, types.uuid, types.jsontype, types.jsontype, resources.SCOPE_TYPES, wtypes.text, wtypes.IntegerType(minimum=1), wtypes.text, wtypes.text, wtypes.text, wtypes.text, types.uuid, bool) def get_all(self, marker=None, limit=None, sort_keys='created_at', sort_dirs='asc', fields='', name=None, workflow_name=None, workflow_id=None, workflow_input=None, workflow_params=None, scope=None, pattern=None, remaining_executions=None, first_execution_time=None, next_execution_time=None, created_at=None, updated_at=None, project_id=None, all_projects=False): """Return all cron triggers. :param marker: Optional. Pagination marker for large data sets. :param limit: Optional. Maximum number of resources to return in a single result. Default value is None for backward compatibility. :param sort_keys: Optional. Columns to sort results by. Default: created_at, which is backward compatible. :param sort_dirs: Optional. 
Directions to sort corresponding to sort_keys, "asc" or "desc" can be chosen. Default: desc. The length of sort_dirs can be equal or less than that of sort_keys. :param fields: Optional. A specified list of fields of the resource to be returned. 'id' will be included automatically in fields if it's provided, since it will be used when constructing 'next' link. :param name: Optional. Keep only resources with a specific name. :param workflow_name: Optional. Keep only resources with a specific workflow name. :param workflow_id: Optional. Keep only resources with a specific workflow ID. :param workflow_input: Optional. Keep only resources with a specific workflow input. :param workflow_params: Optional. Keep only resources with specific workflow parameters. :param scope: Optional. Keep only resources with a specific scope. :param pattern: Optional. Keep only resources with a specific pattern. :param remaining_executions: Optional. Keep only resources with a specific number of remaining executions. :param project_id: Optional. Keep only resources with the specific project id. :param first_execution_time: Optional. Keep only resources with a specific time and date of first execution. :param next_execution_time: Optional. Keep only resources with a specific time and date of next execution. :param created_at: Optional. Keep only resources created at a specific time and date. :param updated_at: Optional. Keep only resources with specific latest update time and date. :param all_projects: Optional. Get resources of all projects. 
""" acl.enforce('cron_triggers:list', context.ctx()) if all_projects: acl.enforce('cron_triggers:list:all_projects', context.ctx()) filters = filter_utils.create_filters_from_request_params( created_at=created_at, name=name, updated_at=updated_at, workflow_name=workflow_name, workflow_id=workflow_id, workflow_input=workflow_input, workflow_params=workflow_params, scope=scope, pattern=pattern, remaining_executions=remaining_executions, first_execution_time=first_execution_time, next_execution_time=next_execution_time, project_id=project_id, ) LOG.debug( "Fetch cron triggers. marker=%s, limit=%s, sort_keys=%s, " "sort_dirs=%s, filters=%s, all_projects=%s", marker, limit, sort_keys, sort_dirs, filters, all_projects ) return rest_utils.get_all( resources.CronTriggers, resources.CronTrigger, db_api.get_cron_triggers, db_api.get_cron_trigger, marker=marker, limit=limit, sort_keys=sort_keys, sort_dirs=sort_dirs, fields=fields, all_projects=all_projects, **filters ) mistral-6.0.0/mistral/api/controllers/v2/workflow.py0000666000175100017510000002503013245513261022565 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # Copyright 2015 Huawei Technologies Co., Ltd. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
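The controllers above (executions, environments, cron triggers) all funnel request parameters through `filter_utils.create_filters_from_request_params` before querying the DB layer. A minimal sketch of that pattern follows; note this is a simplified stand-in, since the real Mistral helper also parses filter operators embedded in the values, while this version captures only the None-dropping behavior:

```python
def create_filters(**params):
    """Drop parameters the client did not supply.

    Keeps only keyword arguments with a non-None value, so absent
    query parameters never reach the DB layer as filters.
    """
    return {k: v for k, v in params.items() if v is not None}
```

With this, `create_filters(name='x', scope=None)` produces `{'name': 'x'}`, so a controller can pass every optional argument through unconditionally.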
from oslo_log import log as logging from oslo_utils import uuidutils import pecan from pecan import hooks from pecan import rest from wsme import types as wtypes import wsmeext.pecan as wsme_pecan from mistral.api import access_control as acl from mistral.api.controllers.v2 import member from mistral.api.controllers.v2 import resources from mistral.api.controllers.v2 import types from mistral.api.controllers.v2 import validation from mistral.api.hooks import content_type as ct_hook from mistral import context from mistral.db.v2 import api as db_api from mistral import exceptions as exc from mistral.lang import parser as spec_parser from mistral.services import workflows from mistral.utils import filter_utils from mistral.utils import rest_utils LOG = logging.getLogger(__name__) class WorkflowsController(rest.RestController, hooks.HookController): # TODO(nmakhotkin): Have a discussion with pecan/WSME folks in order # to have requests and response of different content types. Then # delete ContentTypeHook. __hooks__ = [ct_hook.ContentTypeHook("application/json", ['POST', 'PUT'])] validate = validation.SpecValidationController( spec_parser.get_workflow_list_spec_from_yaml) @pecan.expose() def _lookup(self, identifier, sub_resource, *remainder): LOG.debug( "Lookup subcontrollers of WorkflowsController, " "sub_resource: %s, remainder: %s.", sub_resource, remainder ) if sub_resource == 'members': if not uuidutils.is_uuid_like(identifier): raise exc.WorkflowException( "Only support UUID as resource identifier in resource " "sharing feature." ) # We don't check workflow's existence here, since a user may query # members of a workflow, which doesn't belong to him/her. 
return member.MembersController('workflow', identifier), remainder return super(WorkflowsController, self)._lookup( identifier, sub_resource, *remainder ) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.Workflow, wtypes.text, wtypes.text) def get(self, identifier, namespace=''): """Return the named workflow. :param identifier: Name or UUID of the workflow to retrieve. :param namespace: Optional. Namespace of the workflow to retrieve. """ acl.enforce('workflows:get', context.ctx()) LOG.debug("Fetch workflow [identifier=%s]", identifier) # Use retries to prevent possible failures. r = rest_utils.create_db_retry_object() db_model = r.call( db_api.get_workflow_definition, identifier, namespace=namespace ) return resources.Workflow.from_db_model(db_model) @rest_utils.wrap_pecan_controller_exception @pecan.expose(content_type="text/plain") def put(self, identifier=None, namespace=''): """Update one or more workflows. :param identifier: Optional. If provided, it's the UUID of a workflow. Only one workflow can be updated with the identifier param. :param namespace: Optional. If provided, it's the namespace of the workflow(s). Currently the namespace cannot be changed. The text is allowed to have definitions of multiple workflows. In this case they all will be updated. 
""" acl.enforce('workflows:update', context.ctx()) definition = pecan.request.text scope = pecan.request.GET.get('scope', 'private') if scope not in resources.SCOPE_TYPES.values: raise exc.InvalidModelException( "Scope must be one of the following: %s; actual: " "%s" % (resources.SCOPE_TYPES.values, scope) ) LOG.debug("Update workflow(s) [definition=%s]", definition) db_wfs = workflows.update_workflows( definition, scope=scope, identifier=identifier, namespace=namespace ) workflow_list = [ resources.Workflow.from_db_model(db_wf) for db_wf in db_wfs ] return (workflow_list[0].to_json() if identifier else resources.Workflows(workflows=workflow_list).to_json()) @rest_utils.wrap_pecan_controller_exception @pecan.expose(content_type="text/plain") def post(self, namespace=''): """Create a new workflow. NOTE: The text is allowed to have definitions of multiple workflows. In this case they all will be created. :param namespace: Optional. The namespace to create the workflow in. Workflows with the same name can be added to a given project if they are in two different namespaces. """ acl.enforce('workflows:create', context.ctx()) definition = pecan.request.text scope = pecan.request.GET.get('scope', 'private') pecan.response.status = 201 if scope not in resources.SCOPE_TYPES.values: raise exc.InvalidModelException( "Scope must be one of the following: %s; actual: " "%s" % (resources.SCOPE_TYPES.values, scope) ) LOG.debug("Create workflow(s) [definition=%s]", definition) db_wfs = workflows.create_workflows( definition, scope=scope, namespace=namespace ) workflow_list = [ resources.Workflow.from_db_model(db_wf) for db_wf in db_wfs ] return resources.Workflows(workflows=workflow_list).to_json() @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(None, wtypes.text, wtypes.text, status_code=204) def delete(self, identifier, namespace=''): """Delete a workflow. :param identifier: Name or ID of workflow to delete. :param namespace: Optional. 
Namespace of the workflow to delete. """ acl.enforce('workflows:delete', context.ctx()) LOG.debug("Delete workflow [identifier=%s, namespace=%s]", identifier, namespace) with db_api.transaction(): db_api.delete_workflow_definition(identifier, namespace) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.Workflows, types.uuid, int, types.uniquelist, types.list, types.uniquelist, wtypes.text, wtypes.text, wtypes.text, wtypes.text, resources.SCOPE_TYPES, types.uuid, wtypes.text, wtypes.text, bool, wtypes.text) def get_all(self, marker=None, limit=None, sort_keys='created_at', sort_dirs='asc', fields='', name=None, input=None, definition=None, tags=None, scope=None, project_id=None, created_at=None, updated_at=None, all_projects=False, namespace=None): """Return a list of workflows. :param marker: Optional. Pagination marker for large data sets. :param limit: Optional. Maximum number of resources to return in a single result. Default value is None for backward compatibility. :param sort_keys: Optional. Columns to sort results by. Default: created_at. :param sort_dirs: Optional. Directions to sort corresponding to sort_keys, "asc" or "desc" can be chosen. Default: asc. :param fields: Optional. A specified list of fields of the resource to be returned. 'id' will be included automatically in fields if it's provided, since it will be used when constructing 'next' link. :param name: Optional. Keep only resources with a specific name. :param namespace: Optional. Keep only resources with a specific namespace :param input: Optional. Keep only resources with a specific input. :param definition: Optional. Keep only resources with a specific definition. :param tags: Optional. Keep only resources containing specific tags. :param scope: Optional. Keep only resources with a specific scope. :param project_id: Optional. The same as the requester project_id or different if the scope is public. :param created_at: Optional. 
Keep only resources created at a specific time and date. :param updated_at: Optional. Keep only resources with specific latest update time and date. :param all_projects: Optional. Get resources of all projects. """ acl.enforce('workflows:list', context.ctx()) if all_projects: acl.enforce('workflows:list:all_projects', context.ctx()) filters = filter_utils.create_filters_from_request_params( created_at=created_at, name=name, scope=scope, tags=tags, updated_at=updated_at, input=input, definition=definition, project_id=project_id, namespace=namespace ) LOG.debug("Fetch workflows. marker=%s, limit=%s, sort_keys=%s, " "sort_dirs=%s, fields=%s, filters=%s, all_projects=%s", marker, limit, sort_keys, sort_dirs, fields, filters, all_projects) return rest_utils.get_all( resources.Workflows, resources.Workflow, db_api.get_workflow_definitions, db_api.get_workflow_definition_by_id, marker=marker, limit=limit, sort_keys=sort_keys, sort_dirs=sort_dirs, fields=fields, all_projects=all_projects, **filters ) mistral-6.0.0/mistral/api/controllers/v2/workbook.py0000666000175100017510000001421213245513261022550 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
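Both `put` and `post` in the workflows controller above reject an unknown scope before touching the DB. Stripped of the Mistral types, the check boils down to the following standalone sketch (it raises `ValueError` where the controller raises `InvalidModelException`, so the sketch needs no Mistral imports; the tuple matches the values behind `resources.SCOPE_TYPES`):

```python
VALID_SCOPES = ('private', 'public')  # the values behind resources.SCOPE_TYPES


def validate_scope(scope):
    """Raise ValueError for a scope outside the allowed set.

    Mirrors the controller's guard clause: validate early, with the
    same message shape, before any definition is parsed or stored.
    """
    if scope not in VALID_SCOPES:
        raise ValueError(
            "Scope must be one of the following: %s; actual: %s"
            % (VALID_SCOPES, scope))
    return scope
```

Validating the scope up front keeps a bad request from ever reaching `workflows.create_workflows` or `workflows.update_workflows`.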
from oslo_log import log as logging import pecan from pecan import hooks from pecan import rest from wsme import types as wtypes import wsmeext.pecan as wsme_pecan from mistral.api import access_control as acl from mistral.api.controllers.v2 import resources from mistral.api.controllers.v2 import types from mistral.api.controllers.v2 import validation from mistral.api.hooks import content_type as ct_hook from mistral import context from mistral.db.v2 import api as db_api from mistral.lang import parser as spec_parser from mistral.services import workbooks from mistral.utils import filter_utils from mistral.utils import rest_utils LOG = logging.getLogger(__name__) class WorkbooksController(rest.RestController, hooks.HookController): __hooks__ = [ct_hook.ContentTypeHook("application/json", ['POST', 'PUT'])] validate = validation.SpecValidationController( spec_parser.get_workbook_spec_from_yaml) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.Workbook, wtypes.text) def get(self, name): """Return the named workbook. :param name: Name of workbook to retrieve """ acl.enforce('workbooks:get', context.ctx()) LOG.debug("Fetch workbook [name=%s]", name) # Use retries to prevent possible failures. 
r = rest_utils.create_db_retry_object() db_model = r.call(db_api.get_workbook, name) return resources.Workbook.from_db_model(db_model) @rest_utils.wrap_pecan_controller_exception @pecan.expose(content_type="text/plain") def put(self): """Update a workbook.""" acl.enforce('workbooks:update', context.ctx()) definition = pecan.request.text LOG.debug("Update workbook [definition=%s]", definition) wb_db = workbooks.update_workbook_v2(definition) return resources.Workbook.from_db_model(wb_db).to_json() @rest_utils.wrap_pecan_controller_exception @pecan.expose(content_type="text/plain") def post(self): """Create a new workbook.""" acl.enforce('workbooks:create', context.ctx()) definition = pecan.request.text LOG.debug("Create workbook [definition=%s]", definition) wb_db = workbooks.create_workbook_v2(definition) pecan.response.status = 201 return resources.Workbook.from_db_model(wb_db).to_json() @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(None, wtypes.text, status_code=204) def delete(self, name): """Delete the named workbook. :param name: Name of workbook to delete """ acl.enforce('workbooks:delete', context.ctx()) LOG.debug("Delete workbook [name=%s]", name) db_api.delete_workbook(name) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.Workbooks, types.uuid, int, types.uniquelist, types.list, types.uniquelist, wtypes.text, wtypes.text, wtypes.text, resources.SCOPE_TYPES, wtypes.text, wtypes.text) def get_all(self, marker=None, limit=None, sort_keys='created_at', sort_dirs='asc', fields='', created_at=None, definition=None, name=None, scope=None, tags=None, updated_at=None): """Return a list of workbooks. :param marker: Optional. Pagination marker for large data sets. :param limit: Optional. Maximum number of resources to return in a single result. Default value is None for backward compatibility. :param sort_keys: Optional. Columns to sort results by. Default: created_at. :param sort_dirs: Optional. 
Directions to sort corresponding to sort_keys, "asc" or "desc" can be chosen. Default: asc. :param fields: Optional. A specified list of fields of the resource to be returned. 'id' will be included automatically in fields if it's provided, since it will be used when constructing 'next' link. :param name: Optional. Keep only resources with a specific name. :param definition: Optional. Keep only resources with a specific definition. :param tags: Optional. Keep only resources containing specific tags. :param scope: Optional. Keep only resources with a specific scope. :param created_at: Optional. Keep only resources created at a specific time and date. :param updated_at: Optional. Keep only resources with specific latest update time and date. """ acl.enforce('workbooks:list', context.ctx()) filters = filter_utils.create_filters_from_request_params( created_at=created_at, definition=definition, name=name, scope=scope, tags=tags, updated_at=updated_at ) LOG.debug("Fetch workbooks. marker=%s, limit=%s, sort_keys=%s, " "sort_dirs=%s, fields=%s, filters=%s", marker, limit, sort_keys, sort_dirs, fields, filters) return rest_utils.get_all( resources.Workbooks, resources.Workbook, db_api.get_workbooks, db_api.get_workbook, marker=marker, limit=limit, sort_keys=sort_keys, sort_dirs=sort_dirs, fields=fields, **filters ) mistral-6.0.0/mistral/api/controllers/v2/event_trigger.py0000666000175100017510000001337213245513261023565 0ustar zuulzuul00000000000000# Copyright 2016 - IBM Corp. # Copyright 2016 Catalyst IT Limited # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. from oslo_log import log as logging from pecan import rest import wsmeext.pecan as wsme_pecan from mistral.api import access_control as acl from mistral.api.controllers.v2 import resources from mistral.api.controllers.v2 import types from mistral import context as auth_ctx from mistral.db.v2 import api as db_api from mistral import exceptions as exc from mistral.services import triggers from mistral.utils import rest_utils LOG = logging.getLogger(__name__) UPDATE_NOT_ALLOWED = ['exchange', 'topic', 'event'] CREATE_MANDATORY = set(['exchange', 'topic', 'event', 'workflow_id']) class EventTriggersController(rest.RestController): @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.EventTrigger, types.uuid) def get(self, id): """Returns the specified event_trigger.""" acl.enforce('event_triggers:get', auth_ctx.ctx()) LOG.debug('Fetch event trigger [id=%s]', id) # Use retries to prevent possible failures. r = rest_utils.create_db_retry_object() db_model = r.call(db_api.get_event_trigger, id) return resources.EventTrigger.from_db_model(db_model) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.EventTrigger, body=resources.EventTrigger, status_code=201) def post(self, event_trigger): """Creates a new event trigger.""" acl.enforce('event_triggers:create', auth_ctx.ctx()) values = event_trigger.to_dict() input_keys = [k for k in values if values[k]] if CREATE_MANDATORY - set(input_keys): raise exc.EventTriggerException( "Params %s must be provided for creating event trigger." 
% CREATE_MANDATORY ) if values.get('scope') == 'public': acl.enforce('event_triggers:create:public', auth_ctx.ctx()) LOG.debug('Create event trigger: %s', values) db_model = triggers.create_event_trigger( values.get('name', ''), values.get('exchange'), values.get('topic'), values.get('event'), values.get('workflow_id'), values.get('scope'), workflow_input=values.get('workflow_input'), workflow_params=values.get('workflow_params'), ) return resources.EventTrigger.from_db_model(db_model) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.EventTrigger, types.uuid, body=resources.EventTrigger) def put(self, id, event_trigger): """Updates an existing event trigger. The exchange, topic and event can not be updated. The right way to change them is to delete the event trigger first, then create a new event trigger with new params. """ acl.enforce('event_triggers:update', auth_ctx.ctx()) values = event_trigger.to_dict() for field in UPDATE_NOT_ALLOWED: if values.get(field): raise exc.EventTriggerException( "Can not update fields %s of event trigger." 
% UPDATE_NOT_ALLOWED ) LOG.debug('Update event trigger: [id=%s, values=%s]', id, values) with db_api.transaction(): # ensure that event trigger exists db_api.get_event_trigger(id) db_model = triggers.update_event_trigger(id, values) return resources.EventTrigger.from_db_model(db_model) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(None, types.uuid, status_code=204) def delete(self, id): """Delete event trigger.""" acl.enforce('event_triggers:delete', auth_ctx.ctx()) LOG.debug("Delete event trigger [id=%s]", id) with db_api.transaction(): event_trigger = db_api.get_event_trigger(id) triggers.delete_event_trigger(event_trigger.to_dict()) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.EventTriggers, types.uuid, int, types.uniquelist, types.list, types.uniquelist, bool, types.jsontype) def get_all(self, marker=None, limit=None, sort_keys='created_at', sort_dirs='asc', fields='', all_projects=False, **filters): """Return all event triggers.""" acl.enforce('event_triggers:list', auth_ctx.ctx()) if all_projects: acl.enforce('event_triggers:list:all_projects', auth_ctx.ctx()) LOG.debug( "Fetch event triggers. marker=%s, limit=%s, sort_keys=%s, " "sort_dirs=%s, fields=%s, all_projects=%s, filters=%s", marker, limit, sort_keys, sort_dirs, fields, all_projects, filters ) return rest_utils.get_all( resources.EventTriggers, resources.EventTrigger, db_api.get_event_triggers, db_api.get_event_trigger, resource_function=None, marker=marker, limit=limit, sort_keys=sort_keys, sort_dirs=sort_dirs, fields=fields, all_projects=all_projects, **filters ) mistral-6.0.0/mistral/api/controllers/v2/service.py0000666000175100017510000000541313245513261022356 0ustar zuulzuul00000000000000# Copyright 2015 Huawei Technologies Co., Ltd. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_config import cfg
from oslo_log import log as logging
from pecan import rest
import six
import tooz.coordination
import wsmeext.pecan as wsme_pecan

from mistral.api import access_control as acl
from mistral.api.controllers.v2 import resources
# TODO(rakhmerov): invalid dependency, a REST controller must not depend on
# a launch script.
from mistral.cmd import launch
from mistral import context
from mistral import exceptions as exc
from mistral.service import coordination
from mistral.utils import rest_utils

LOG = logging.getLogger(__name__)


class ServicesController(rest.RestController):
    @rest_utils.wrap_wsme_controller_exception
    @wsme_pecan.wsexpose(resources.Services)
    def get_all(self):
        """Return all services."""
        acl.enforce('services:list', context.ctx())

        LOG.debug("Fetch services.")

        if not cfg.CONF.coordination.backend_url:
            raise exc.CoordinationException("Service API is not supported.")

        service_coordinator = coordination.get_service_coordinator()

        if not service_coordinator.is_active():
            raise exc.CoordinationException(
                "Failed to connect to coordination backend."
            )

        services_list = []
        service_group = ['%s_group' % i for i in launch.LAUNCH_OPTIONS]

        try:
            for group in service_group:
                members = service_coordinator.get_members(group)

                members_list = [
                    resources.Service.from_dict(
                        {
                            'type': group,
                            'name': member
                        }
                    ) for member in members
                ]

                services_list.extend(members_list)
        except tooz.coordination.ToozError as e:
            # If the network is interrupted or the connection is shut down
            # manually, ToozError will be raised.
raise exc.CoordinationException( "Failed to get service members from coordination backend. %s" % six.text_type(e) ) return resources.Services(services=services_list) mistral-6.0.0/mistral/api/controllers/v2/root.py0000666000175100017510000000441213245513261021677 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import pecan from wsme import types as wtypes import wsmeext.pecan as wsme_pecan from mistral.api.controllers import resource from mistral.api.controllers.v2 import action from mistral.api.controllers.v2 import action_execution from mistral.api.controllers.v2 import cron_trigger from mistral.api.controllers.v2 import environment from mistral.api.controllers.v2 import event_trigger from mistral.api.controllers.v2 import execution from mistral.api.controllers.v2 import service from mistral.api.controllers.v2 import task from mistral.api.controllers.v2 import workbook from mistral.api.controllers.v2 import workflow class RootResource(resource.Resource): """Root resource for API version 2. It references all other resources belonging to the API. """ uri = wtypes.text # TODO(everyone): what else do we need here? # TODO(everyone): we need to collect all the links from API v2.0 # and provide them. 
class Controller(object): """API root controller for version 2.""" workbooks = workbook.WorkbooksController() actions = action.ActionsController() workflows = workflow.WorkflowsController() executions = execution.ExecutionsController() tasks = task.TasksController() cron_triggers = cron_trigger.CronTriggersController() environments = environment.EnvironmentController() action_executions = action_execution.ActionExecutionsController() services = service.ServicesController() event_triggers = event_trigger.EventTriggersController() @wsme_pecan.wsexpose(RootResource) def index(self): return RootResource(uri='%s/%s' % (pecan.request.host_url, 'v2')) mistral-6.0.0/mistral/api/controllers/v2/types.py0000666000175100017510000000636013245513261022064 0ustar zuulzuul00000000000000# Copyright 2015 Huawei Technologies Co., Ltd. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import json from oslo_utils import uuidutils import six from wsme import types as wtypes from mistral import exceptions as exc class ListType(wtypes.UserType): """A simple list type.""" basetype = wtypes.text name = 'list' @staticmethod def validate(value): """Validate and convert the input to a ListType. :param value: A comma separated string of values :returns: A list of values. """ items = [v.strip().lower() for v in six.text_type(value).split(',')] # remove empty items. 
return [x for x in items if x] @staticmethod def frombasetype(value): return ListType.validate(value) if value is not None else None class UniqueListType(ListType): """A simple list type with no duplicate items.""" name = 'uniquelist' @staticmethod def validate(value): """Validate and convert the input to a UniqueListType. :param value: A comma separated string of values. :returns: A list with no duplicate items. """ items = ListType.validate(value) seen = set() return [x for x in items if not (x in seen or seen.add(x))] @staticmethod def frombasetype(value): return UniqueListType.validate(value) if value is not None else None class UuidType(wtypes.UserType): """A simple UUID type. The builtin UuidType class in wsme.types doesn't work properly with pecan. """ basetype = wtypes.text name = 'uuid' @staticmethod def validate(value): if not uuidutils.is_uuid_like(value): raise exc.InputException( "Expected a uuid but received %s." % value ) return value @staticmethod def frombasetype(value): return UuidType.validate(value) if value is not None else None class JsonType(wtypes.UserType): """A simple JSON type.""" basetype = wtypes.text name = 'json' def validate(self, value): if not value: return {} if not isinstance(value, dict): raise exc.InputException( 'JsonType field value must be a dictionary [actual=%s]' % value ) return value def frombasetype(self, value): if isinstance(value, dict): return value try: return json.loads(value) if value is not None else None except TypeError as e: raise ValueError(e) def tobasetype(self, value): # Value must be a dict. return json.dumps(value) if value is not None else None uuid = UuidType() list = ListType() uniquelist = UniqueListType() jsontype = JsonType() mistral-6.0.0/mistral/api/controllers/v2/action_execution.py0000666000175100017510000004441513245513261024263 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # Copyright 2016 - Brocade Communications Systems, Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg from oslo_log import log as logging from pecan import rest import sqlalchemy as sa import tenacity from wsme import types as wtypes import wsmeext.pecan as wsme_pecan from mistral.api import access_control as acl from mistral.api.controllers.v2 import resources from mistral.api.controllers.v2 import types from mistral import context from mistral.db.v2 import api as db_api from mistral import exceptions as exc from mistral.rpc import clients as rpc from mistral.utils import filter_utils from mistral.utils import rest_utils from mistral.workflow import states from mistral_lib import actions as ml_actions LOG = logging.getLogger(__name__) SUPPORTED_TRANSITION_STATES = [ states.SUCCESS, states.ERROR, states.CANCELLED, states.PAUSED, states.RUNNING ] def _load_deferred_output_field(action_ex): # We need to refer to this lazy-load field explicitly in # order to make sure that it is correctly loaded. hasattr(action_ex, 'output') # Use retries to prevent possible failures. 
@tenacity.retry(
    retry=tenacity.retry_if_exception_type(sa.exc.OperationalError),
    stop=tenacity.stop_after_attempt(10),
    wait=tenacity.wait_incrementing(increment=100)  # 0.1 seconds
)
def _get_action_execution(id):
    with db_api.transaction():
        return _get_action_execution_resource(db_api.get_action_execution(id))


def _get_action_execution_resource(action_ex):
    _load_deferred_output_field(action_ex)

    return _get_action_execution_resource_for_list(action_ex)


def _get_action_execution_resource_for_list(action_ex):
    # TODO(nmakhotkin): Get rid of using dicts for constructing resources.
    # TODO(nmakhotkin): Use db_model for this instead.
    res = resources.ActionExecution.from_db_model(action_ex)

    task_name = (action_ex.task_execution.name
                 if action_ex.task_execution else None)
    setattr(res, 'task_name', task_name)

    return res


def _get_action_executions(task_execution_id=None, marker=None, limit=None,
                           sort_keys='created_at', sort_dirs='asc',
                           fields='', include_output=False, **filters):
    """Return all action executions.

    Where project_id is the same as the requester or
    project_id is different but the scope is public.

    :param marker: Optional. Pagination marker for large data sets.
    :param limit: Optional. Maximum number of resources to return in a
                  single result. Default value is None for backward
                  compatibility.
    :param sort_keys: Optional. Columns to sort results by.
                      Default: created_at, which is backward compatible.
    :param sort_dirs: Optional. Directions to sort corresponding to
                      sort_keys, "asc" or "desc" can be chosen.
                      Default: asc. The length of sort_dirs can be equal
                      or less than that of sort_keys.
    :param fields: Optional. A specified list of fields of the resource to
                   be returned. 'id' will be included automatically in
                   fields if it's provided, since it will be used when
                   constructing 'next' link.
    :param filters: Optional. A list of filters to apply to the result.
""" if task_execution_id: filters['task_execution_id'] = task_execution_id if include_output: resource_function = _get_action_execution_resource else: resource_function = _get_action_execution_resource_for_list return rest_utils.get_all( resources.ActionExecutions, resources.ActionExecution, db_api.get_action_executions, db_api.get_action_execution, resource_function=resource_function, marker=marker, limit=limit, sort_keys=sort_keys, sort_dirs=sort_dirs, fields=fields, **filters ) class ActionExecutionsController(rest.RestController): @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.ActionExecution, wtypes.text) def get(self, id): """Return the specified action_execution. :param id: UUID of action execution to retrieve """ acl.enforce('action_executions:get', context.ctx()) LOG.debug("Fetch action_execution [id=%s]", id) return _get_action_execution(id) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.ActionExecution, body=resources.ActionExecution, status_code=201) def post(self, action_ex): """Create new action_execution. :param action_ex: Action to execute """ acl.enforce('action_executions:create', context.ctx()) LOG.debug( "Create action_execution [action_execution=%s]", action_ex ) name = action_ex.name description = action_ex.description or None action_input = action_ex.input or {} params = action_ex.params or {} if not name: raise exc.InputException( "Please provide at least action name to run action." ) values = rpc.get_engine_client().start_action( name, action_input, description=description, **params ) return resources.ActionExecution.from_dict(values) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose( resources.ActionExecution, wtypes.text, body=resources.ActionExecution ) def put(self, id, action_ex): """Update the specified action_execution. 
        :param id: UUID of action execution to update
        :param action_ex: Action execution for update
        """
        acl.enforce('action_executions:update', context.ctx())

        LOG.debug(
            "Update action_execution [id=%s, action_execution=%s]",
            id,
            action_ex
        )

        if action_ex.state not in SUPPORTED_TRANSITION_STATES:
            raise exc.InvalidResultException(
                "Error. Expected one of %s, actual: %s" % (
                    SUPPORTED_TRANSITION_STATES,
                    action_ex.state
                )
            )

        if states.is_completed(action_ex.state):
            output = action_ex.output

            if action_ex.state == states.SUCCESS:
                result = ml_actions.Result(data=output)
            elif action_ex.state == states.ERROR:
                if not output:
                    output = 'Unknown error'
                result = ml_actions.Result(error=output)
            elif action_ex.state == states.CANCELLED:
                result = ml_actions.Result(cancel=True)

            values = rpc.get_engine_client().on_action_complete(id, result)

        if action_ex.state in [states.PAUSED, states.RUNNING]:
            state = action_ex.state
            values = rpc.get_engine_client().on_action_update(id, state)

        return resources.ActionExecution.from_dict(values)

    @rest_utils.wrap_wsme_controller_exception
    @wsme_pecan.wsexpose(resources.ActionExecutions, types.uuid, int,
                         types.uniquelist, types.list, types.uniquelist,
                         wtypes.text, wtypes.text, wtypes.text, wtypes.text,
                         wtypes.text, wtypes.text, types.uuid, wtypes.text,
                         wtypes.text, bool, types.jsontype, types.jsontype,
                         types.jsontype, wtypes.text, bool)
    def get_all(self, marker=None, limit=None, sort_keys='created_at',
                sort_dirs='asc', fields='', created_at=None, name=None,
                tags=None, updated_at=None, workflow_name=None,
                task_name=None, task_execution_id=None, state=None,
                state_info=None, accepted=None, input=None, output=None,
                params=None, description=None, include_output=False):
        """Return all action executions.

        Where project_id is the same as the requester or
        project_id is different but the scope is public.

        :param marker: Optional. Pagination marker for large data sets.
        :param limit: Optional. Maximum number of resources to return in a
                      single result.
                      Default value is None for backward compatibility.
        :param sort_keys: Optional. Columns to sort results by.
                          Default: created_at, which is backward compatible.
        :param sort_dirs: Optional. Directions to sort corresponding to
                          sort_keys, "asc" or "desc" can be chosen.
                          Default: asc. The length of sort_dirs can be equal
                          or less than that of sort_keys.
        :param fields: Optional. A specified list of fields of the resource to
                       be returned. 'id' will be included automatically in
                       fields if it's provided, since it will be used when
                       constructing 'next' link.
        :param name: Optional. Keep only resources with a specific name.
        :param workflow_name: Optional. Keep only resources with a specific
                              workflow name.
        :param task_name: Optional. Keep only resources with a specific
                          task name.
        :param task_execution_id: Optional. Keep only resources within a
                                  specific task execution.
        :param state: Optional. Keep only resources with a specific state.
        :param state_info: Optional. Keep only resources with specific
                           state information.
        :param accepted: Optional. Keep only resources which have been
                         accepted or not.
        :param input: Optional. Keep only resources with a specific input.
        :param output: Optional. Keep only resources with a specific output.
        :param params: Optional. Keep only resources with specific
                       parameters.
        :param description: Optional. Keep only resources with a specific
                            description.
        :param tags: Optional. Keep only resources containing specific tags.
        :param created_at: Optional. Keep only resources created at a specific
                           time and date.
        :param updated_at: Optional. Keep only resources with specific latest
                           update time and date.
        :param include_output: Optional.
Include the output for all executions in the list """ acl.enforce('action_executions:list', context.ctx()) filters = filter_utils.create_filters_from_request_params( created_at=created_at, name=name, tags=tags, updated_at=updated_at, workflow_name=workflow_name, task_name=task_name, task_execution_id=task_execution_id, state=state, state_info=state_info, accepted=accepted, input=input, output=output, params=params, description=description ) LOG.debug( "Fetch action_executions. marker=%s, limit=%s, " "sort_keys=%s, sort_dirs=%s, filters=%s", marker, limit, sort_keys, sort_dirs, filters ) return _get_action_executions( marker=marker, limit=limit, sort_keys=sort_keys, sort_dirs=sort_dirs, fields=fields, include_output=include_output, **filters ) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(None, wtypes.text, status_code=204) def delete(self, id): """Delete the specified action_execution. :param id: UUID of action execution to delete """ acl.enforce('action_executions:delete', context.ctx()) LOG.debug("Delete action_execution [id=%s]", id) if not cfg.CONF.api.allow_action_execution_deletion: raise exc.NotAllowedException("Action execution deletion is not " "allowed.") with db_api.transaction(): action_ex = db_api.get_action_execution(id) if action_ex.task_execution_id: raise exc.NotAllowedException( "Only ad-hoc action execution can be deleted." ) if not states.is_completed(action_ex.state): raise exc.NotAllowedException( "Only completed action execution can be deleted." 
                )

            return db_api.delete_action_execution(id)


class TasksActionExecutionController(rest.RestController):
    @rest_utils.wrap_wsme_controller_exception
    @wsme_pecan.wsexpose(resources.ActionExecutions, types.uuid, types.uuid,
                         int, types.uniquelist, types.list, types.uniquelist,
                         wtypes.text, types.uniquelist, wtypes.text,
                         wtypes.text, wtypes.text, wtypes.text, wtypes.text,
                         wtypes.text, bool, types.jsontype, types.jsontype,
                         types.jsontype, wtypes.text, bool)
    def get_all(self, task_execution_id, marker=None, limit=None,
                sort_keys='created_at', sort_dirs='asc', fields='',
                created_at=None, name=None, tags=None, updated_at=None,
                workflow_name=None, task_name=None, state=None,
                state_info=None, accepted=None, input=None, output=None,
                params=None, description=None, include_output=None):
        """Return all action executions within the task execution.

        Where project_id is the same as the requester or
        project_id is different but the scope is public.

        :param task_execution_id: Keep only resources within a specific
                                  task execution.
        :param marker: Optional. Pagination marker for large data sets.
        :param limit: Optional. Maximum number of resources to return in a
                      single result. Default value is None for backward
                      compatibility.
        :param sort_keys: Optional. Columns to sort results by.
                          Default: created_at, which is backward compatible.
        :param sort_dirs: Optional. Directions to sort corresponding to
                          sort_keys, "asc" or "desc" can be chosen.
                          Default: asc. The length of sort_dirs can be equal
                          or less than that of sort_keys.
        :param fields: Optional. A specified list of fields of the resource to
                       be returned. 'id' will be included automatically in
                       fields if it's provided, since it will be used when
                       constructing 'next' link.
        :param name: Optional. Keep only resources with a specific name.
        :param workflow_name: Optional. Keep only resources with a specific
                              workflow name.
        :param task_name: Optional. Keep only resources with a specific
                          task name.
        :param state: Optional. Keep only resources with a specific state.
:param state_info: Optional. Keep only resources with specific state information. :param accepted: Optional. Keep only resources which have been accepted or not. :param input: Optional. Keep only resources with a specific input. :param output: Optional. Keep only resources with a specific output. :param params: Optional. Keep only resources with specific parameters. :param description: Optional. Keep only resources with a specific description. :param tags: Optional. Keep only resources containing specific tags. :param created_at: Optional. Keep only resources created at a specific time and date. :param updated_at: Optional. Keep only resources with specific latest update time and date. :param include_output: Optional. Include the output for all executions in the list """ acl.enforce('action_executions:list', context.ctx()) filters = filter_utils.create_filters_from_request_params( created_at=created_at, name=name, tags=tags, updated_at=updated_at, workflow_name=workflow_name, task_name=task_name, task_execution_id=task_execution_id, state=state, state_info=state_info, accepted=accepted, input=input, output=output, params=params, description=description ) LOG.debug( "Fetch action_executions. marker=%s, limit=%s, " "sort_keys=%s, sort_dirs=%s, filters=%s", marker, limit, sort_keys, sort_dirs, filters ) return _get_action_executions( marker=marker, limit=limit, sort_keys=sort_keys, sort_dirs=sort_dirs, fields=fields, include_output=include_output, **filters ) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.ActionExecution, wtypes.text, wtypes.text) def get(self, task_execution_id, action_ex_id): """Return the specified action_execution. 
:param task_execution_id: Task execution UUID :param action_ex_id: Action execution UUID """ acl.enforce('action_executions:get', context.ctx()) LOG.debug("Fetch action_execution [id=%s]", action_ex_id) return _get_action_execution(action_ex_id) mistral-6.0.0/mistral/api/controllers/v2/member.py0000666000175100017510000001340613245513261022166 0ustar zuulzuul00000000000000# Copyright 2015 Huawei Technologies Co., Ltd. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import functools from oslo_config import cfg from oslo_log import log as logging from pecan import rest from wsme import types as wtypes import wsmeext.pecan as wsme_pecan from mistral.api import access_control as acl from mistral.api.controllers.v2 import resources from mistral import context from mistral.db.v2 import api as db_api from mistral import exceptions as exc from mistral.utils import rest_utils LOG = logging.getLogger(__name__) CONF = cfg.CONF def auth_enable_check(func): @functools.wraps(func) def wrapped(*args, **kwargs): if not CONF.pecan.auth_enable: msg = ("Resource sharing feature can only be supported with " "authentication enabled.") raise exc.WorkflowException(msg) return func(*args, **kwargs) return wrapped class MembersController(rest.RestController): def __init__(self, type, resource_id): self.type = type self.resource_id = resource_id super(MembersController, self).__init__() @rest_utils.wrap_pecan_controller_exception @auth_enable_check @wsme_pecan.wsexpose(resources.Member, wtypes.text) def 
get(self, member_id): """Shows resource member details.""" acl.enforce('members:get', context.ctx()) LOG.debug( "Fetch resource member [resource_id=%s, resource_type=%s, " "member_id=%s].", self.resource_id, self.type, member_id ) # Use retries to prevent possible failures. r = rest_utils.create_db_retry_object() member_db = r.call( db_api.get_resource_member, self.resource_id, self.type, member_id ) return resources.Member.from_db_model(member_db) @rest_utils.wrap_pecan_controller_exception @auth_enable_check @wsme_pecan.wsexpose(resources.Members) def get_all(self): """Return all members with whom the resource has been shared.""" acl.enforce('members:list', context.ctx()) LOG.debug( "Fetch resource members [resource_id=%s, resource_type=%s].", self.resource_id, self.type ) db_members = db_api.get_resource_members( self.resource_id, self.type ) members = [ resources.Member.from_db_model(db_member) for db_member in db_members ] return resources.Members(members=members) @rest_utils.wrap_pecan_controller_exception @auth_enable_check @wsme_pecan.wsexpose( resources.Member, body=resources.Member, status_code=201 ) def post(self, member_info): """Shares the resource to a new member.""" acl.enforce('members:create', context.ctx()) LOG.debug( "Share resource to a member. [resource_id=%s, " "resource_type=%s, member_info=%s].", self.resource_id, self.type, member_info ) if not member_info.member_id: raise exc.WorkflowException("Member id must be provided.") with db_api.transaction(): wf_db = db_api.get_workflow_definition(self.resource_id) if wf_db.scope != 'private': raise exc.WorkflowException( "Only private resource could be shared." 
) resource_member = { 'resource_id': self.resource_id, 'resource_type': self.type, 'member_id': member_info.member_id, 'status': 'pending' } db_member = db_api.create_resource_member(resource_member) return resources.Member.from_db_model(db_member) @rest_utils.wrap_pecan_controller_exception @auth_enable_check @wsme_pecan.wsexpose(resources.Member, wtypes.text, body=resources.Member) def put(self, member_id, member_info): """Sets the status for a resource member.""" acl.enforce('members:update', context.ctx()) LOG.debug( "Update resource member status. [resource_id=%s, " "member_id=%s, member_info=%s].", self.resource_id, member_id, member_info ) if not member_info.status: msg = "Status must be provided." raise exc.WorkflowException(msg) db_member = db_api.update_resource_member( self.resource_id, self.type, member_id, {'status': member_info.status} ) return resources.Member.from_db_model(db_member) @rest_utils.wrap_pecan_controller_exception @auth_enable_check @wsme_pecan.wsexpose(None, wtypes.text, status_code=204) def delete(self, member_id): """Deletes a member from the member list of a resource.""" acl.enforce('members:delete', context.ctx()) LOG.debug( "Delete resource member. [resource_id=%s, " "resource_type=%s, member_id=%s].", self.resource_id, self.type, member_id ) db_api.delete_resource_member( self.resource_id, self.type, member_id ) mistral-6.0.0/mistral/api/controllers/v2/task.py0000666000175100017510000004210513245513261021657 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import json from oslo_log import log as logging from pecan import rest import sqlalchemy as sa import tenacity from wsme import types as wtypes import wsmeext.pecan as wsme_pecan from mistral.api import access_control as acl from mistral.api.controllers.v2 import action_execution from mistral.api.controllers.v2 import resources from mistral.api.controllers.v2 import types from mistral import context from mistral.db.v2 import api as db_api from mistral import exceptions as exc from mistral.lang import parser as spec_parser from mistral.rpc import clients as rpc from mistral.utils import filter_utils from mistral.utils import rest_utils from mistral.workflow import data_flow from mistral.workflow import states LOG = logging.getLogger(__name__) STATE_TYPES = wtypes.Enum(str, states.IDLE, states.RUNNING, states.SUCCESS, states.ERROR, states.RUNNING_DELAYED) def _get_task_resource_with_result(task_ex): task = resources.Task.from_db_model(task_ex) task.result = json.dumps(data_flow.get_task_execution_result(task_ex)) return task class TaskExecutionsController(rest.RestController): @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.Executions, types.uuid, types.uuid, int, types.uniquelist, types.list, types.uniquelist, wtypes.text, types.uuid, wtypes.text, types.jsontype, STATE_TYPES, wtypes.text, types.jsontype, types.jsontype, wtypes.text, wtypes.text) def get_all(self, task_execution_id, marker=None, limit=None, sort_keys='created_at', sort_dirs='asc', fields='', workflow_name=None, workflow_id=None, description=None, params=None, state=None, 
                state_info=None, input=None, output=None,
                created_at=None, updated_at=None):
        """Return all executions that belong to the given task execution.

        :param task_execution_id: Task execution ID.
        :param marker: Optional. Pagination marker for large data sets.
        :param limit: Optional. Maximum number of resources to return in a
                      single result. Default value is None for backward
                      compatibility.
        :param sort_keys: Optional. Columns to sort results by.
                          Default: created_at, which is backward compatible.
        :param sort_dirs: Optional. Directions to sort corresponding to
                          sort_keys, "asc" or "desc" can be chosen.
                          Default: asc. The length of sort_dirs can be equal
                          or less than that of sort_keys.
        :param fields: Optional. A specified list of fields of the resource to
                       be returned. 'id' will be included automatically in
                       fields if it's provided, since it will be used when
                       constructing 'next' link.
        :param workflow_name: Optional. Keep only resources with a specific
                              workflow name.
        :param workflow_id: Optional. Keep only resources with a specific
                            workflow ID.
        :param description: Optional. Keep only resources with a specific
                            description.
        :param params: Optional. Keep only resources with specific
                       parameters.
        :param state: Optional. Keep only resources with a specific state.
        :param state_info: Optional. Keep only resources with specific
                           state information.
        :param input: Optional. Keep only resources with a specific input.
        :param output: Optional. Keep only resources with a specific output.
        :param created_at: Optional. Keep only resources created at a specific
                           time and date.
        :param updated_at: Optional. Keep only resources with specific latest
                           update time and date.
""" acl.enforce('executions:list', context.ctx()) filters = filter_utils.create_filters_from_request_params( task_execution_id=task_execution_id, created_at=created_at, workflow_name=workflow_name, workflow_id=workflow_id, params=params, state=state, state_info=state_info, input=input, output=output, updated_at=updated_at, description=description ) LOG.debug( "Fetch executions. marker=%s, limit=%s, sort_keys=%s, " "sort_dirs=%s, filters=%s", marker, limit, sort_keys, sort_dirs, filters ) return rest_utils.get_all( resources.Executions, resources.Execution, db_api.get_workflow_executions, db_api.get_workflow_execution, marker=marker, limit=limit, sort_keys=sort_keys, sort_dirs=sort_dirs, fields=fields, **filters ) # Use retries to prevent possible failures. @tenacity.retry( retry=tenacity.retry_if_exception_type(sa.exc.OperationalError), stop=tenacity.stop_after_attempt(10), wait=tenacity.wait_incrementing(increment=100) # 0.1 seconds ) def _get_task_execution(id): with db_api.transaction(): task_ex = db_api.get_task_execution(id) return _get_task_resource_with_result(task_ex) class TasksController(rest.RestController): action_executions = action_execution.TasksActionExecutionController() workflow_executions = TaskExecutionsController() @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.Task, wtypes.text) def get(self, id): """Return the specified task. 
        :param id: UUID of task to retrieve.
        """
        acl.enforce('tasks:get', context.ctx())

        LOG.debug("Fetch task [id=%s]", id)

        return _get_task_execution(id)

    @rest_utils.wrap_wsme_controller_exception
    @wsme_pecan.wsexpose(resources.Tasks, types.uuid, int, types.uniquelist,
                         types.list, types.uniquelist, wtypes.text,
                         wtypes.text, types.uuid, types.uuid, STATE_TYPES,
                         wtypes.text, wtypes.text, types.jsontype, bool,
                         wtypes.text, wtypes.text, bool, types.jsontype)
    def get_all(self, marker=None, limit=None, sort_keys='created_at',
                sort_dirs='asc', fields='', name=None, workflow_name=None,
                workflow_id=None, workflow_execution_id=None, state=None,
                state_info=None, result=None, published=None, processed=None,
                created_at=None, updated_at=None, reset=None, env=None):
        """Return all tasks.

        Tasks are returned where project_id is the same as the requester's,
        or where project_id is different but the scope is public.

        :param marker: Optional. Pagination marker for large data sets.
        :param limit: Optional. Maximum number of resources to return in a
                      single result. Default value is None for backward
                      compatibility.
        :param sort_keys: Optional. Columns to sort results by.
                          Default: created_at, which is backward compatible.
        :param sort_dirs: Optional. Directions to sort corresponding to
                          sort_keys, "asc" or "desc" can be chosen.
                          Default: asc. The length of sort_dirs can be equal
                          or less than that of sort_keys.
        :param fields: Optional. A specified list of fields of the resource
                       to be returned. 'id' will be included automatically in
                       fields if it's provided, since it will be used when
                       constructing 'next' link.
        :param name: Optional. Keep only resources with a specific name.
        :param workflow_name: Optional. Keep only resources with a specific
                              workflow name.
        :param workflow_id: Optional. Keep only resources with a specific
                            workflow ID.
        :param workflow_execution_id: Optional. Keep only resources with a
                                      specific workflow execution ID.
        :param state: Optional. Keep only resources with a specific state.
        :param state_info: Optional.
Keep only resources with specific state information. :param result: Optional. Keep only resources with a specific result. :param published: Optional. Keep only resources with specific published content. :param processed: Optional. Keep only resources which have been processed or not. :param reset: Optional. Keep only resources which have been reset or not. :param env: Optional. Keep only resources with a specific environment. :param created_at: Optional. Keep only resources created at a specific time and date. :param updated_at: Optional. Keep only resources with specific latest update time and date. """ acl.enforce('tasks:list', context.ctx()) filters = filter_utils.create_filters_from_request_params( created_at=created_at, workflow_name=workflow_name, workflow_id=workflow_id, state=state, state_info=state_info, updated_at=updated_at, name=name, workflow_execution_id=workflow_execution_id, result=result, published=published, processed=processed, reset=reset, env=env ) LOG.debug( "Fetch tasks. marker=%s, limit=%s, sort_keys=%s, sort_dirs=%s," " filters=%s", marker, limit, sort_keys, sort_dirs, filters ) return rest_utils.get_all( resources.Tasks, resources.Task, db_api.get_task_executions, db_api.get_task_execution, marker=marker, limit=limit, sort_keys=sort_keys, sort_dirs=sort_dirs, fields=fields, **filters ) @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.Task, wtypes.text, body=resources.Task) def put(self, id, task): """Update the specified task execution. :param id: Task execution ID. :param task: Task execution object. 
""" acl.enforce('tasks:update', context.ctx()) LOG.debug("Update task execution [id=%s, task=%s]", id, task) with db_api.transaction(): task_ex = db_api.get_task_execution(id) task_spec = spec_parser.get_task_spec(task_ex.spec) task_name = task.name or None reset = task.reset env = task.env or None if task_name and task_name != task_ex.name: raise exc.WorkflowException('Task name does not match.') wf_ex = db_api.get_workflow_execution( task_ex.workflow_execution_id ) wf_name = task.workflow_name or None if wf_name and wf_name != wf_ex.name: raise exc.WorkflowException('Workflow name does not match.') if task.state != states.RUNNING: raise exc.WorkflowException( 'Invalid task state. ' 'Only updating task to rerun is supported.' ) if task_ex.state != states.ERROR: raise exc.WorkflowException( 'The current task execution must be in ERROR for rerun.' ' Only updating task to rerun is supported.' ) if not task_spec.get_with_items() and not reset: raise exc.WorkflowException( 'Only with-items task has the option to not reset.' ) rpc.get_engine_client().rerun_workflow( task_ex.id, reset=reset, env=env ) with db_api.transaction(): task_ex = db_api.get_task_execution(id) return _get_task_resource_with_result(task_ex) class ExecutionTasksController(rest.RestController): @rest_utils.wrap_wsme_controller_exception @wsme_pecan.wsexpose(resources.Tasks, types.uuid, types.uuid, int, types.uniquelist, types.list, types.uniquelist, wtypes.text, wtypes.text, types.uuid, STATE_TYPES, wtypes.text, wtypes.text, types.jsontype, bool, wtypes.text, wtypes.text, bool, types.jsontype) def get_all(self, workflow_execution_id, marker=None, limit=None, sort_keys='created_at', sort_dirs='asc', fields='', name=None, workflow_name=None, workflow_id=None, state=None, state_info=None, result=None, published=None, processed=None, created_at=None, updated_at=None, reset=None, env=None): """Return all tasks within the execution. 
        Tasks are returned where project_id is the same as the requester's,
        or where project_id is different but the scope is public.

        :param marker: Optional. Pagination marker for large data sets.
        :param limit: Optional. Maximum number of resources to return in a
                      single result. Default value is None for backward
                      compatibility.
        :param sort_keys: Optional. Columns to sort results by.
                          Default: created_at, which is backward compatible.
        :param sort_dirs: Optional. Directions to sort corresponding to
                          sort_keys, "asc" or "desc" can be chosen.
                          Default: asc. The length of sort_dirs can be equal
                          or less than that of sort_keys.
        :param fields: Optional. A specified list of fields of the resource
                       to be returned. 'id' will be included automatically in
                       fields if it's provided, since it will be used when
                       constructing 'next' link.
        :param name: Optional. Keep only resources with a specific name.
        :param workflow_name: Optional. Keep only resources with a specific
                              workflow name.
        :param workflow_id: Optional. Keep only resources with a specific
                            workflow ID.
        :param workflow_execution_id: Keep only resources with a specific
                                      workflow execution ID (taken from the
                                      URL, so it is always set).
        :param state: Optional. Keep only resources with a specific state.
        :param state_info: Optional. Keep only resources with specific
                           state information.
        :param result: Optional. Keep only resources with a specific result.
        :param published: Optional. Keep only resources with specific
                          published content.
        :param processed: Optional. Keep only resources which have been
                          processed or not.
        :param reset: Optional. Keep only resources which have been reset or
                      not.
        :param env: Optional. Keep only resources with a specific
                    environment.
        :param created_at: Optional. Keep only resources created at a
                           specific time and date.
        :param updated_at: Optional. Keep only resources with specific latest
                           update time and date.
""" acl.enforce('tasks:list', context.ctx()) filters = filter_utils.create_filters_from_request_params( workflow_execution_id=workflow_execution_id, created_at=created_at, workflow_name=workflow_name, workflow_id=workflow_id, state=state, state_info=state_info, updated_at=updated_at, name=name, result=result, published=published, processed=processed, reset=reset, env=env ) LOG.debug( "Fetch tasks. workflow_execution_id=%s, marker=%s, limit=%s, " "sort_keys=%s, sort_dirs=%s, filters=%s", workflow_execution_id, marker, limit, sort_keys, sort_dirs, filters ) return rest_utils.get_all( resources.Tasks, resources.Task, db_api.get_task_executions, db_api.get_task_execution, marker=marker, limit=limit, sort_keys=sort_keys, sort_dirs=sort_dirs, fields=fields, **filters ) mistral-6.0.0/mistral/api/controllers/resource.py0000666000175100017510000001066313245513261022221 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import json from wsme import types as wtypes from mistral import utils class Resource(wtypes.Base): """REST API Resource.""" _wsme_attributes = [] def to_dict(self): d = {} for attr in self._wsme_attributes: attr_val = getattr(self, attr.name) if not isinstance(attr_val, wtypes.UnsetType): d[attr.name] = attr_val return d @classmethod def from_tuples(cls, tuple_iterator): obj = cls() for col_name, col_val in tuple_iterator: if hasattr(obj, col_name): # Convert all datetime values to strings. 
setattr(obj, col_name, utils.datetime_to_str(col_val)) return obj @classmethod def from_dict(cls, d): return cls.from_tuples(d.items()) @classmethod def from_db_model(cls, db_model): return cls.from_tuples(db_model.iter_columns()) def __str__(self): """WSME based implementation of __str__.""" res = "%s [" % type(self).__name__ first = True for attr in self._wsme_attributes: if not first: res += ', ' else: first = False res += "%s='%s'" % (attr.name, getattr(self, attr.name)) return res + "]" def to_json(self): return json.dumps(self.to_dict()) @classmethod def get_fields(cls): obj = cls() return [attr.name for attr in obj._wsme_attributes] class ResourceList(Resource): """Resource containing the list of other resources.""" next = wtypes.text """A link to retrieve the next subset of the resource list""" @property def collection(self): return getattr(self, self._type) @classmethod def convert_with_links(cls, resources, limit, url=None, fields=None, **kwargs): resource_list = cls() setattr(resource_list, resource_list._type, resources) resource_list.next = resource_list.get_next( limit, url=url, fields=fields, **kwargs ) return resource_list def has_next(self, limit): """Return whether resources has more items.""" return len(self.collection) and len(self.collection) == limit def get_next(self, limit, url=None, fields=None, **kwargs): """Return a link to the next subset of the resources.""" if not self.has_next(limit): return wtypes.Unset q_args = ''.join( ['%s=%s&' % (key, value) for key, value in kwargs.items()] ) resource_args = ( '?%(args)slimit=%(limit)d&marker=%(marker)s' % { 'args': q_args, 'limit': limit, 'marker': self.collection[-1].id } ) # Fields is handled specially here, we can move it above when it's # supported by all resources query. 
        if fields:
            resource_args += '&fields=%s' % fields

        next_link = "%(host_url)s/v2/%(resource)s%(args)s" % {
            'host_url': url,
            'resource': self._type,
            'args': resource_args
        }

        return next_link

    def to_dict(self):
        d = {}

        for attr in self._wsme_attributes:
            attr_val = getattr(self, attr.name)

            if isinstance(attr_val, list):
                # NOTE: guard against an empty list, otherwise indexing
                # attr_val[0] raises IndexError.
                if attr_val and isinstance(attr_val[0], Resource):
                    d[attr.name] = [v.to_dict() for v in attr_val]
            elif not isinstance(attr_val, wtypes.UnsetType):
                d[attr.name] = attr_val

        return d


class Link(Resource):
    """Web link."""

    href = wtypes.text
    target = wtypes.text
    rel = wtypes.text

    @classmethod
    def sample(cls):
        return cls(href='http://example.com/here', target='here', rel='self')
mistral-6.0.0/mistral/api/controllers/__init__.py0000666000175100017510000000000013245513261022111 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/api/controllers/root.py0000666000175100017510000000435413245513261021355 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_log import log as logging
import pecan
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan

from mistral.api.controllers import resource
from mistral.api.controllers.v2 import root as v2_root

LOG = logging.getLogger(__name__)

API_STATUS = wtypes.Enum(str, 'SUPPORTED', 'CURRENT', 'DEPRECATED')


class APIVersion(resource.Resource):
    """An API Version."""

    id = wtypes.text
    "The version identifier."
status = API_STATUS "The status of the API (SUPPORTED, CURRENT or DEPRECATED)." links = wtypes.ArrayType(resource.Link) "The link to the versioned API." @classmethod def sample(cls): return cls( id='v1.0', status='CURRENT', links=[ resource.Link(target_name='v1', rel="self", href='http://example.com:9777/v1') ] ) class APIVersions(resource.Resource): """API Versions.""" versions = wtypes.ArrayType(APIVersion) @classmethod def sample(cls): v2 = APIVersion(id='v2.0', status='CURRENT', rel="self", href='http://example.com:9777/v2') return cls(versions=[v2]) class RootController(object): v2 = v2_root.Controller() @wsme_pecan.wsexpose(APIVersions) def index(self): LOG.debug("Fetching API versions.") host_url_v2 = '%s/%s' % (pecan.request.host_url, 'v2') api_v2 = APIVersion( id='v2.0', status='CURRENT', links=[resource.Link(href=host_url_v2, target='v2', rel="self",)] ) return APIVersions(versions=[api_v2]) mistral-6.0.0/mistral/api/service.py0000666000175100017510000000413113245513261017455 0ustar zuulzuul00000000000000# Copyright 2016 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_concurrency import processutils from oslo_config import cfg from oslo_service import service from oslo_service import wsgi from mistral.api import app from mistral.rpc import clients as rpc_clients class WSGIService(service.ServiceBase): """Provides ability to launch Mistral API from wsgi app.""" def __init__(self, name): self.name = name self.app = app.setup_app() self.workers = ( cfg.CONF.api.api_workers or processutils.get_worker_count() ) self.server = wsgi.Server( cfg.CONF, name, self.app, host=cfg.CONF.api.host, port=cfg.CONF.api.port, use_ssl=cfg.CONF.api.enable_ssl_api ) def start(self): # NOTE: When oslo.service creates an API worker it forks a new child # system process. The child process is created as precise copy of the # parent process (see how os.fork() works) and within the child process # oslo.service calls service's start() method again to reinitialize # what's needed. So we must clean up all RPC clients so that RPC works # properly (e.g. message routing for synchronous calls may be based on # generated queue names). rpc_clients.cleanup() self.server.start() print('API server started.') def stop(self): self.server.stop() def wait(self): self.server.wait() def reset(self): self.server.reset() mistral-6.0.0/mistral/api/access_control.py0000666000175100017510000000717413245513261021030 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
"""Access Control API server.""" from keystonemiddleware import auth_token from oslo_config import cfg from oslo_policy import policy from mistral import exceptions as exc from mistral import policies _ENFORCER = None def setup(app): if cfg.CONF.pecan.auth_enable and cfg.CONF.auth_type == 'keystone': conf = dict(cfg.CONF.keystone_authtoken) # Change auth decisions of requests to the app itself. conf.update({'delay_auth_decision': True}) # NOTE(rakhmerov): Policy enforcement works only if Keystone # authentication is enabled. No support for other authentication # types at this point. _ensure_enforcer_initialization() return auth_token.AuthProtocol(app, conf) else: return app def enforce(action, context, target=None, do_raise=True, exc=exc.NotAllowedException): """Verifies that the action is valid on the target in this context. :param action: String, representing the action to be checked. This should be colon separated for clarity. i.e. ``workflows:create`` :param context: Mistral context. :param target: Dictionary, representing the object of the action. For object creation, this should be a dictionary representing the location of the object. e.g. ``{'project_id': context.project_id}`` :param do_raise: if True (the default), raises specified exception. :param exc: Exception to be raised if not authorized. Default is mistral.exceptions.NotAllowedException. :return: returns True if authorized and False if not authorized and do_raise is False. """ target_obj = { 'project_id': context.project_id, 'user_id': context.user_id, } target_obj.update(target or {}) policy_context = context.to_policy_values() # Because policy.json or policy.yaml example in Mistral repo still uses # the rule 'is_admin: True', we insert 'is_admin' key to the default # policy values. 
policy_context['is_admin'] = context.is_admin _ensure_enforcer_initialization() return _ENFORCER.authorize( action, target_obj, policy_context, do_raise=do_raise, exc=exc ) def _ensure_enforcer_initialization(): global _ENFORCER if not _ENFORCER: _ENFORCER = policy.Enforcer(cfg.CONF) _ENFORCER.register_defaults(policies.list_rules()) _ENFORCER.load_rules() def get_limited_to(headers): """Return the user and project the request should be limited to. :param headers: HTTP headers dictionary :return: A tuple of (user, project), set to None if there's no limit on one of these. """ return headers.get('X-User-Id'), headers.get('X-Project-Id') def get_limited_to_project(headers): """Return the project the request should be limited to. :param headers: HTTP headers dictionary :return: A project, or None if there's no limit on it. """ return get_limited_to(headers)[1] mistral-6.0.0/mistral/db/0000775000175100017510000000000013245513604015257 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/db/v2/0000775000175100017510000000000013245513604015606 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/db/v2/sqlalchemy/0000775000175100017510000000000013245513604017750 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/db/v2/sqlalchemy/models.py0000666000175100017510000004227613245513261021621 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import hashlib import json import sys from oslo_config import cfg from oslo_log import log as logging import sqlalchemy as sa from sqlalchemy import event from sqlalchemy.orm import backref from sqlalchemy.orm import relationship from mistral.db.sqlalchemy import model_base as mb from mistral.db.sqlalchemy import types as st from mistral import exceptions as exc from mistral.services import security from mistral import utils # Definition objects. LOG = logging.getLogger(__name__) def _get_hash_function_by(column_name): def calc_hash(context): val = context.current_parameters[column_name] or {} if isinstance(val, dict): # If the value is a dictionary we need to make sure to have # keys in the same order in a string representation. hash_base = json.dumps(sorted(val.items())) else: hash_base = str(val) return hashlib.sha256(hash_base.encode('utf-8')).hexdigest() return calc_hash def validate_long_type_length(cls, field_name, value): """Makes sure the value does not exceeds the maximum size.""" if value: # Get the configured limit. size_limit_kb = cfg.CONF.engine.execution_field_size_limit_kb # If the size is unlimited. if size_limit_kb < 0: return size_kb = int(sys.getsizeof(str(value)) / 1024) if size_kb > size_limit_kb: LOG.error( "Size limit %dKB exceed for class [%s], " "field %s of size %dKB.", size_limit_kb, str(cls), field_name, size_kb ) raise exc.SizeLimitExceededException( field_name, size_kb, size_limit_kb ) def register_length_validator(attr_name): """Register an event listener on the attribute. This event listener will validate the size every time a 'set' occurs. 
""" for cls in utils.iter_subclasses(Execution): if hasattr(cls, attr_name): event.listen( getattr(cls, attr_name), 'set', lambda t, v, o, i: validate_long_type_length(cls, attr_name, v) ) class Definition(mb.MistralSecureModelBase): __abstract__ = True id = mb.id_column() name = sa.Column(sa.String(255)) definition = sa.Column(st.MediumText(), nullable=True) spec = sa.Column(st.JsonMediumDictType()) tags = sa.Column(st.JsonListType()) is_system = sa.Column(sa.Boolean()) # There's no WorkbookExecution so we safely omit "Definition" in the name. class Workbook(Definition): """Contains info about workbook (including definition in Mistral DSL).""" __tablename__ = 'workbooks_v2' __table_args__ = ( sa.UniqueConstraint('name', 'project_id'), sa.Index('%s_project_id' % __tablename__, 'project_id'), sa.Index('%s_scope' % __tablename__, 'scope'), ) class WorkflowDefinition(Definition): """Contains info about workflow (including definition in Mistral DSL).""" __tablename__ = 'workflow_definitions_v2' namespace = sa.Column(sa.String(255), nullable=True) __table_args__ = ( sa.UniqueConstraint( 'name', 'namespace', 'project_id' ), sa.Index('%s_is_system' % __tablename__, 'is_system'), sa.Index('%s_project_id' % __tablename__, 'project_id'), sa.Index('%s_scope' % __tablename__, 'scope'), ) class ActionDefinition(Definition): """Contains info about registered Actions.""" __tablename__ = 'action_definitions_v2' __table_args__ = ( sa.UniqueConstraint('name', 'project_id'), sa.Index('%s_is_system' % __tablename__, 'is_system'), sa.Index('%s_action_class' % __tablename__, 'action_class'), sa.Index('%s_project_id' % __tablename__, 'project_id'), sa.Index('%s_scope' % __tablename__, 'scope'), ) # Main properties. description = sa.Column(sa.Text()) input = sa.Column(sa.Text()) # Service properties. action_class = sa.Column(sa.String(200)) attributes = sa.Column(st.JsonDictType()) # Execution objects. class Execution(mb.MistralSecureModelBase): __abstract__ = True # Common properties. 
id = mb.id_column() name = sa.Column(sa.String(255)) description = sa.Column(sa.String(255), nullable=True) workflow_name = sa.Column(sa.String(255)) workflow_namespace = sa.Column(sa.String(255)) workflow_id = sa.Column(sa.String(80)) spec = sa.Column(st.JsonMediumDictType()) state = sa.Column(sa.String(20)) state_info = sa.Column(sa.Text(), nullable=True) tags = sa.Column(st.JsonListType()) # Internal properties which can be used by engine. runtime_context = sa.Column(st.JsonLongDictType()) class ActionExecution(Execution): """Contains action execution information.""" __tablename__ = 'action_executions_v2' __table_args__ = ( sa.Index('%s_project_id' % __tablename__, 'project_id'), sa.Index('%s_scope' % __tablename__, 'scope'), sa.Index('%s_state' % __tablename__, 'state'), sa.Index('%s_updated_at' % __tablename__, 'updated_at') ) # Main properties. accepted = sa.Column(sa.Boolean(), default=False) input = sa.Column(st.JsonLongDictType(), nullable=True) output = sa.orm.deferred(sa.Column(st.JsonLongDictType(), nullable=True)) class WorkflowExecution(Execution): """Contains workflow execution information.""" __tablename__ = 'workflow_executions_v2' __table_args__ = ( sa.Index('%s_project_id' % __tablename__, 'project_id'), sa.Index('%s_scope' % __tablename__, 'scope'), sa.Index('%s_state' % __tablename__, 'state'), sa.Index('%s_updated_at' % __tablename__, 'updated_at'), ) # Main properties. accepted = sa.Column(sa.Boolean(), default=False) input = sa.Column(st.JsonLongDictType(), nullable=True) output = sa.orm.deferred(sa.Column(st.JsonLongDictType(), nullable=True)) params = sa.Column(st.JsonLongDictType()) # Initial workflow context containing workflow variables, environment, # openstack security context etc. # NOTES: # * Data stored in this structure should not be copied into inbound # contexts of tasks. No need to duplicate it. # * This structure does not contain workflow input. 
context = sa.Column(st.JsonLongDictType()) class TaskExecution(Execution): """Contains task runtime information.""" __tablename__ = 'task_executions_v2' __table_args__ = ( sa.Index('%s_project_id' % __tablename__, 'project_id'), sa.Index('%s_scope' % __tablename__, 'scope'), sa.Index('%s_state' % __tablename__, 'state'), sa.Index('%s_updated_at' % __tablename__, 'updated_at'), sa.UniqueConstraint('unique_key') ) # Main properties. action_spec = sa.Column(st.JsonLongDictType()) unique_key = sa.Column(sa.String(250), nullable=True) type = sa.Column(sa.String(10)) # Whether the task is fully processed (publishing and calculating commands # after it). It allows to simplify workflow controller implementations # significantly. processed = sa.Column(sa.BOOLEAN, default=False) # Data Flow properties. in_context = sa.Column(st.JsonLongDictType()) published = sa.Column(st.JsonLongDictType()) @property def executions(self): return ( self.action_executions if not self.spec.get('workflow') else self.workflow_executions ) for cls in utils.iter_subclasses(Execution): event.listen( # Catch and trim Execution.state_info to always fit allocated size. # Note that the limit is 65500 which is less than 65535 (2^16 -1). # The reason is that utils.cut() is not exactly accurate in case if # the value is not a string, but, for example, a dictionary. If we # limit it exactly to 65535 then once in a while it may go slightly # beyond the allowed maximum size. It may depend on the order of # keys in a string representation and other things that are hidden # inside utils.cut_dict() method. cls.state_info, 'set', lambda t, v, o, i: utils.cut(v, 65500), retval=True ) # Many-to-one for 'ActionExecution' and 'TaskExecution'. 
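The many-to-one wiring above (a child table with a `ForeignKey` to the parent plus a `relationship()` attached after class creation with `cascade='all, delete-orphan'`) can be illustrated with a minimal, self-contained sketch. This is not Mistral code: the `Task`/`Action` model names and the in-memory SQLite engine are hypothetical stand-ins used only to show the pattern.

```python
# Minimal sketch (hypothetical models, not Mistral's) of the parent/child
# relationship pattern used above: deleting a parent also deletes its
# children because of cascade='all, delete-orphan'.
import sqlalchemy as sa
from sqlalchemy.orm import backref, relationship, sessionmaker

try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy >= 1.4
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Task(Base):
    __tablename__ = 'tasks'
    id = sa.Column(sa.Integer, primary_key=True)


class Action(Base):
    __tablename__ = 'actions'
    id = sa.Column(sa.Integer, primary_key=True)
    task_id = sa.Column(
        sa.Integer,
        sa.ForeignKey('tasks.id', ondelete='CASCADE')
    )


# Attach the relationship after class creation, as models.py does above.
Task.actions = relationship(
    Action,
    backref=backref('task', remote_side=[Task.id]),
    cascade='all, delete-orphan',
    foreign_keys=[Action.task_id],
)

engine = sa.create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

task = Task(actions=[Action(), Action()])
session.add(task)
session.commit()

# ORM-level cascade: removing the parent removes the children too.
session.delete(task)
session.commit()
```

The `delete-orphan` part additionally deletes a child that is removed from the parent's collection, which is why the relationships above never leave dangling action or task execution rows.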
ActionExecution.task_execution_id = sa.Column( sa.String(36), sa.ForeignKey(TaskExecution.id, ondelete='CASCADE'), nullable=True ) TaskExecution.action_executions = relationship( ActionExecution, backref=backref('task_execution', remote_side=[TaskExecution.id]), cascade='all, delete-orphan', foreign_keys=ActionExecution.task_execution_id, lazy='select' ) sa.Index( '%s_task_execution_id' % ActionExecution.__tablename__, 'task_execution_id' ) # Many-to-one for 'WorkflowExecution' and 'TaskExecution'. WorkflowExecution.task_execution_id = sa.Column( sa.String(36), sa.ForeignKey(TaskExecution.id, ondelete='CASCADE'), nullable=True ) TaskExecution.workflow_executions = relationship( WorkflowExecution, backref=backref('task_execution', remote_side=[TaskExecution.id]), cascade='all, delete-orphan', foreign_keys=WorkflowExecution.task_execution_id, lazy='select' ) sa.Index( '%s_task_execution_id' % WorkflowExecution.__tablename__, 'task_execution_id' ) # Many-to-one for 'WorkflowExecution' and 'WorkflowExecution' WorkflowExecution.root_execution_id = sa.Column( sa.String(36), sa.ForeignKey(WorkflowExecution.id, ondelete='SET NULL'), nullable=True ) # Many-to-one for 'TaskExecution' and 'WorkflowExecution'. TaskExecution.workflow_execution_id = sa.Column( sa.String(36), sa.ForeignKey(WorkflowExecution.id, ondelete='CASCADE') ) WorkflowExecution.task_executions = relationship( TaskExecution, backref=backref('workflow_execution', remote_side=[WorkflowExecution.id]), cascade='all, delete-orphan', foreign_keys=TaskExecution.workflow_execution_id, lazy='select' ) sa.Index( '%s_workflow_execution_id' % TaskExecution.__tablename__, TaskExecution.workflow_execution_id ) sa.Index( '%s_workflow_execution_id_name' % TaskExecution.__tablename__, TaskExecution.workflow_execution_id, TaskExecution.name ) # Other objects. 
class DelayedCall(mb.MistralModelBase): """Contains info about delayed calls.""" __tablename__ = 'delayed_calls_v2' id = mb.id_column() factory_method_path = sa.Column(sa.String(200), nullable=True) target_method_name = sa.Column(sa.String(80), nullable=False) method_arguments = sa.Column(st.JsonDictType()) serializers = sa.Column(st.JsonDictType()) key = sa.Column(sa.String(250), nullable=True) auth_context = sa.Column(st.JsonDictType()) execution_time = sa.Column(sa.DateTime, nullable=False) processing = sa.Column(sa.Boolean, default=False, nullable=False) sa.Index( '%s_execution_time' % DelayedCall.__tablename__, DelayedCall.execution_time ) class Environment(mb.MistralSecureModelBase): """Contains environment variables for workflow execution.""" __tablename__ = 'environments_v2' __table_args__ = ( sa.UniqueConstraint('name', 'project_id'), sa.Index('%s_name' % __tablename__, 'name'), sa.Index('%s_project_id' % __tablename__, 'project_id'), sa.Index('%s_scope' % __tablename__, 'scope'), ) # Main properties. id = mb.id_column() name = sa.Column(sa.String(200)) description = sa.Column(sa.Text()) variables = sa.Column(st.JsonLongDictType()) class CronTrigger(mb.MistralSecureModelBase): """Contains info about cron triggers.""" __tablename__ = 'cron_triggers_v2' __table_args__ = ( sa.UniqueConstraint('name', 'project_id'), sa.UniqueConstraint( 'workflow_input_hash', 'workflow_name', 'pattern', 'project_id', 'workflow_params_hash', 'remaining_executions', 'first_execution_time' ), sa.Index( '%s_next_execution_time' % __tablename__, 'next_execution_time' ), sa.Index('%s_project_id' % __tablename__, 'project_id'), sa.Index('%s_scope' % __tablename__, 'scope'), sa.Index('%s_workflow_name' % __tablename__, 'workflow_name'), ) id = mb.id_column() name = sa.Column(sa.String(200)) pattern = sa.Column( sa.String(100), nullable=True, default='0 0 30 2 0' # Set default to 'never'. 
) first_execution_time = sa.Column(sa.DateTime, nullable=True) next_execution_time = sa.Column(sa.DateTime, nullable=False) workflow_name = sa.Column(sa.String(255)) remaining_executions = sa.Column(sa.Integer) workflow_id = sa.Column( sa.String(36), sa.ForeignKey(WorkflowDefinition.id) ) workflow = relationship('WorkflowDefinition', lazy='joined') workflow_params = sa.Column(st.JsonDictType()) workflow_params_hash = sa.Column( sa.CHAR(64), default=_get_hash_function_by('workflow_params') ) workflow_input = sa.Column(st.JsonDictType()) workflow_input_hash = sa.Column( sa.CHAR(64), default=_get_hash_function_by('workflow_input') ) trust_id = sa.Column(sa.String(80)) def to_dict(self): d = super(CronTrigger, self).to_dict() utils.datetime_to_str_in_dict(d, 'first_execution_time') utils.datetime_to_str_in_dict(d, 'next_execution_time') return d # Register all hooks related to secure models. mb.register_secure_model_hooks() # TODO(rakhmerov): This is a bad solution. It's hard to find in the code, # configure flexibly etc. Fix it. # Register an event listener to verify that the size of all the long columns # affected by the user do not exceed the limit configuration. 
for attr_name in ['input', 'output', 'params', 'published']: register_length_validator(attr_name) class ResourceMember(mb.MistralModelBase): """Contains info about resource members.""" __tablename__ = 'resource_members_v2' __table_args__ = ( sa.UniqueConstraint( 'resource_id', 'resource_type', 'member_id' ), ) id = mb.id_column() resource_id = sa.Column(sa.String(80), nullable=False) resource_type = sa.Column( sa.String(50), nullable=False, default='workflow' ) project_id = sa.Column(sa.String(80), default=security.get_project_id) member_id = sa.Column(sa.String(80), nullable=False) status = sa.Column(sa.String(20), nullable=False, default="pending") class EventTrigger(mb.MistralSecureModelBase): """Contains info about event triggers.""" __tablename__ = 'event_triggers_v2' __table_args__ = ( sa.UniqueConstraint('exchange', 'topic', 'event', 'workflow_id', 'project_id'), sa.Index('%s_project_id_workflow_id' % __tablename__, 'project_id', 'workflow_id'), ) id = mb.id_column() name = sa.Column(sa.String(200)) workflow_id = sa.Column( sa.String(36), sa.ForeignKey(WorkflowDefinition.id) ) workflow = relationship('WorkflowDefinition', lazy='joined') workflow_params = sa.Column(st.JsonDictType()) workflow_input = sa.Column(st.JsonDictType()) exchange = sa.Column(sa.String(80), nullable=False) topic = sa.Column(sa.String(80), nullable=False) event = sa.Column(sa.String(80), nullable=False) trust_id = sa.Column(sa.String(80)) class NamedLock(mb.MistralModelBase): """Contains info about named locks. Usage of named locks is based on properties of READ COMMITTED transactions of the most generally used SQL databases such as Postgres, MySQL, Oracle etc. The locking scenario is as follows: 1. Transaction A (TX-A) inserts a row with unique 'id' and some value that identifies a locked object stored in 'name'. 2. 
Transaction B (TX-B) and any subsequent transactions try to insert
       a row with unique 'id' and the same value of the 'name' field, and
       they wait till TX-A is completed due to transactional properties of
       READ COMMITTED.
    3. If TX-A then immediately deletes the record and commits, TX-B or
       one of the subsequent transactions is released and its 'insert'
       completes.
    4. The scenario then repeats from step #2, where the role of TX-A is
       played by the transaction that has just done the insert.

    Practically, this table should never contain any committed rows. All
    its usage relies on the transactional properties of the storage.
    """

    __tablename__ = 'named_locks'

    sa.UniqueConstraint('name')

    id = mb.id_column()
    name = sa.Column(sa.String(250))


sa.UniqueConstraint(NamedLock.name)
mistral-6.0.0/mistral/db/v2/sqlalchemy/api.py
# Copyright 2015 - Mirantis, Inc.
# Copyright 2015 - StackStorm, Inc.
# Copyright 2016 - Brocade Communications Systems, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
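The named-lock protocol described in the `NamedLock` docstring above can be sketched in a self-contained way. The demo below is an assumption-laden illustration, not Mistral's implementation: it uses the stdlib `sqlite3` module with a table mirroring `named_locks`, and because SQLite raises `IntegrityError` on a duplicate key instead of blocking the way MySQL/Postgres do under READ COMMITTED, it only shows the insert-to-acquire / delete-to-release shape of the idea, not the blocking behavior.

```python
import sqlite3

# Minimal sketch of the insert/delete named-lock pattern.
# On MySQL/Postgres under READ COMMITTED, the second INSERT of the same
# name would block until the first transaction commits or rolls back;
# SQLite raises IntegrityError immediately, which is enough for a demo.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE named_locks (id INTEGER PRIMARY KEY, name TEXT UNIQUE)"
)


def acquire_lock(conn, name):
    # Acquiring the lock is inserting a row with a unique 'name'.
    try:
        conn.execute("INSERT INTO named_locks (name) VALUES (?)", (name,))
        return True
    except sqlite3.IntegrityError:
        return False


def release_lock(conn, name):
    # Deleting the row releases waiters; the table stays empty when idle.
    conn.execute("DELETE FROM named_locks WHERE name = ?", (name,))


assert acquire_lock(conn, "wf-123")      # first caller gets the lock
assert not acquire_lock(conn, "wf-123")  # contender refused (would block on MySQL)
release_lock(conn, "wf-123")
assert acquire_lock(conn, "wf-123")      # re-acquirable after release
```

Note the design point from the docstring: no row is ever meant to be committed and kept, so the "lock table" carries no persistent state at all.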
import contextlib import sys import threading from oslo_config import cfg from oslo_db import exception as db_exc from oslo_db import sqlalchemy as oslo_sqlalchemy from oslo_db.sqlalchemy import utils as db_utils from oslo_log import log as logging from oslo_utils import uuidutils # noqa import sqlalchemy as sa from sqlalchemy.ext.compiler import compiles from sqlalchemy.sql.expression import Insert from mistral import context from mistral.db.sqlalchemy import base as b from mistral.db.sqlalchemy import model_base as mb from mistral.db.sqlalchemy import sqlite_lock from mistral.db import utils as m_dbutils from mistral.db.v2.sqlalchemy import filters as db_filters from mistral.db.v2.sqlalchemy import models from mistral import exceptions as exc from mistral.services import security from mistral import utils from mistral.workflow import states CONF = cfg.CONF LOG = logging.getLogger(__name__) _SCHEMA_LOCK = threading.RLock() _initialized = False def get_backend(): """Consumed by openstack common code. The backend is this module itself. :return: Name of db backend. """ return sys.modules[__name__] def setup_db(): global _initialized with _SCHEMA_LOCK: if _initialized: return try: models.Workbook.metadata.create_all(b.get_engine()) _initialized = True except sa.exc.OperationalError as e: raise exc.DBError("Failed to setup database: %s" % e) def drop_db(): global _initialized with _SCHEMA_LOCK: if not _initialized: return try: models.Workbook.metadata.drop_all(b.get_engine()) _initialized = False except Exception as e: raise exc.DBError("Failed to drop database: %s" % e) # Transaction management. 
def start_tx():
    b.start_tx()


def commit_tx():
    b.commit_tx()


def rollback_tx():
    b.rollback_tx()


def end_tx():
    b.end_tx()


@contextlib.contextmanager
def transaction(read_only=False):
    start_tx()

    try:
        yield

        if read_only:
            rollback_tx()
        else:
            commit_tx()
    finally:
        end_tx()


@b.session_aware()
def refresh(model, session=None):
    session.refresh(model)


@b.session_aware()
def acquire_lock(model, id, session=None):
    # Expire all so all objects queried after lock is acquired
    # will be up-to-date from the DB and not from cache.
    session.expire_all()

    if b.get_driver_name() == 'sqlite':
        # In case of 'sqlite' we need to apply a manual lock.
        sqlite_lock.acquire_lock(id, session)

    return _lock_entity(model, id)


def _lock_entity(model, id):
    # Get entity by ID in "FOR UPDATE" mode and expect exactly one object.
    return _secure_query(model).with_for_update().filter(model.id == id).one()


@b.session_aware()
def update_on_match(id, specimen, values, session=None):
    """Updates a model with the given values if it matches the given specimen.

    :param id: ID of a persistent model.
    :param specimen: Specimen used to match the persistent model.
    :param values: Values to set to the model if fields of the object
        match the specimen.
    :param session: Session.
    :return: Persistent object attached to the session.
    """

    assert id is not None
    assert specimen is not None

    # We need to flush the session because when we do update_on_match()
    # it doesn't always update the state of the persistent object properly
    # when it merges a specimen state into it. Some fields get wiped out from
    # the history of ORM events that must be flushed later. For example, it
    # doesn't work well in case of Postgres.
    # See https://bugs.launchpad.net/mistral/+bug/1736821
    session.flush()

    model = None
    model_class = type(specimen)

    # Use WHERE clause to exclude possible conflicts if the state has
    # already been changed.
    try:
        model = b.model_query(model_class).update_on_match(
            specimen=specimen,
            surrogate_key='id',
            values=values
        )
    except oslo_sqlalchemy.update_match.NoRowsMatched:
        LOG.info(
            "Can't change state of persistent object "
            "because it has already been changed. [model_class=%s, id=%s, "
            "specimen=%s, values=%s]",
            model_class, id, specimen, values
        )

    return model


def _secure_query(model, *columns):
    query = b.model_query(model, columns)

    if not issubclass(model, mb.MistralSecureModelBase):
        return query

    shared_res_ids = []
    res_type = RESOURCE_MAPPING.get(model, '')

    if res_type:
        shared_res = _get_accepted_resources(res_type)
        shared_res_ids = [res.resource_id for res in shared_res]

    query_criterion = sa.or_(
        model.project_id == security.get_project_id(),
        model.scope == 'public'
    )

    # NOTE(kong): Include IN_ predicate in query filter only if shared_res_ids
    # is not empty to avoid sqlalchemy SAWarning and wasting a db call.
    if shared_res_ids:
        query_criterion = sa.or_(
            query_criterion,
            model.id.in_(shared_res_ids)
        )

    query = query.filter(query_criterion)

    return query


def _paginate_query(model, limit=None, marker=None, sort_keys=None,
                    sort_dirs=None, query=None):
    if not query:
        query = _secure_query(model)

    sort_keys = sort_keys if sort_keys else []

    # We should add sorting by id only if we use pagination or when
    # there is no specified ordering criteria. Otherwise
    # we can omit it to increase the performance.
    if not sort_keys or (marker or limit) and 'id' not in sort_keys:
        sort_keys.append('id')
        sort_dirs.append('asc') if sort_dirs else None

    query = db_utils.paginate_query(
        query,
        model,
        limit,
        sort_keys,
        marker=marker,
        sort_dirs=sort_dirs
    )

    return query


def _delete_all(model, **kwargs):
    # NOTE(kong): Because we use 'in_' operator in _secure_query(), delete()
    # method will raise error with default parameter.
Please refer to # http://docs.sqlalchemy.org/en/rel_1_0/orm/query.html#sqlalchemy.orm.query.Query.delete _secure_query(model).filter_by(**kwargs).delete(synchronize_session=False) def _get_collection(model, insecure=False, limit=None, marker=None, sort_keys=None, sort_dirs=None, fields=None, **filters): columns = ( tuple([getattr(model, f) for f in fields if hasattr(model, f)]) if fields else () ) query = (b.model_query(model, *columns) if insecure else _secure_query(model, *columns)) query = db_filters.apply_filters(query, model, **filters) query = _paginate_query( model, limit, marker, sort_keys, sort_dirs, query ) return query.all() def _get_db_object_by_name(model, name, filter_=None, order_by=None): query = _secure_query(model) final_filter = model.name == name if filter_ is not None: final_filter = sa.and_(final_filter, filter_) if order_by is not None: query = query.order_by(order_by) return query.filter(final_filter).first() def _get_db_object_by_id(model, id, insecure=False): query = b.model_query(model) if insecure else _secure_query(model) return query.filter_by(id=id).first() def _get_db_object_by_name_and_namespace_or_id(model, identifier, namespace=None, insecure=False): query = b.model_query(model) if insecure else _secure_query(model) match_name = model.name == identifier if namespace is not None: match_name = sa.and_(match_name, model.namespace == namespace) match_id = model.id == identifier query = query.filter( sa.or_( match_id, match_name ) ) return query.first() @compiles(Insert) def append_string(insert, compiler, **kw): s = compiler.visit_insert(insert, **kw) if 'append_string' in insert.kwargs: append = insert.kwargs['append_string'] if append: s += " " + append if 'replace_string' in insert.kwargs: replace = insert.kwargs['replace_string'] if isinstance(replace, tuple): s = s.replace(replace[0], replace[1]) return s # Workbook definitions. 
@b.session_aware() def get_workbook(name, session=None): wb = _get_db_object_by_name(models.Workbook, name) if not wb: raise exc.DBEntityNotFoundError( "Workbook not found [workbook_name=%s]" % name ) return wb @b.session_aware() def load_workbook(name, session=None): return _get_db_object_by_name(models.Workbook, name) @b.session_aware() def get_workbooks(session=None, **kwargs): return _get_collection(models.Workbook, **kwargs) @b.session_aware() def create_workbook(values, session=None): wb = models.Workbook() wb.update(values.copy()) try: wb.save(session=session) except db_exc.DBDuplicateEntry as e: raise exc.DBDuplicateEntryError( "Duplicate entry for WorkbookDefinition: %s" % e.columns ) return wb @b.session_aware() def update_workbook(name, values, session=None): wb = get_workbook(name) wb.update(values.copy()) return wb @b.session_aware() def create_or_update_workbook(name, values, session=None): if not _get_db_object_by_name(models.Workbook, name): return create_workbook(values) else: return update_workbook(name, values) @b.session_aware() def delete_workbook(name, session=None): count = _secure_query(models.Workbook).filter( models.Workbook.name == name).delete() if count == 0: raise exc.DBEntityNotFoundError( "Workbook not found [workbook_name=%s]" % name ) @b.session_aware() def delete_workbooks(session=None, **kwargs): return _delete_all(models.Workbook, **kwargs) # Workflow definitions. @b.session_aware() def get_workflow_definition(identifier, namespace='', session=None): """Gets workflow definition by name or uuid. :param identifier: Identifier could be in the format of plain string or uuid. :param namespace: The namespace the workflow is in. Optional. :return: Workflow definition. 
""" ctx = context.ctx() wf_def = _get_db_object_by_name_and_namespace_or_id( models.WorkflowDefinition, identifier, namespace=namespace, insecure=ctx.is_admin ) if not wf_def: raise exc.DBEntityNotFoundError( "Workflow not found [workflow_identifier=%s, namespace=%s]" % (identifier, namespace) ) return wf_def @b.session_aware() def get_workflow_definition_by_id(id, session=None): wf_def = _get_db_object_by_id(models.WorkflowDefinition, id) if not wf_def: raise exc.DBEntityNotFoundError( "Workflow not found [workflow_id=%s]" % id ) return wf_def @b.session_aware() def load_workflow_definition(name, namespace='', session=None): model = models.WorkflowDefinition filter_ = model.namespace.in_([namespace, '']) # Give priority to objects not in the default namespace. order_by = model.namespace.desc() return _get_db_object_by_name( model, name, filter_, order_by ) @b.session_aware() def get_workflow_definitions(fields=None, session=None, **kwargs): if fields and 'input' in fields: fields.remove('input') fields.append('spec') return _get_collection( model=models.WorkflowDefinition, fields=fields, **kwargs ) @b.session_aware() def create_workflow_definition(values, session=None): wf_def = models.WorkflowDefinition() wf_def.update(values.copy()) try: wf_def.save(session=session) except db_exc.DBDuplicateEntry as e: raise exc.DBDuplicateEntryError( "Duplicate entry for WorkflowDefinition: %s" % e.columns ) return wf_def @b.session_aware() def update_workflow_definition(identifier, values, namespace='', session=None): wf_def = get_workflow_definition(identifier, namespace=namespace) m_dbutils.check_db_obj_access(wf_def) if wf_def.scope == 'public' and values['scope'] == 'private': # Check cron triggers. cron_triggers = get_cron_triggers(insecure=True, workflow_id=wf_def.id) for c_t in cron_triggers: if c_t.project_id != wf_def.project_id: raise exc.NotAllowedException( "Can not update scope of workflow that has cron triggers " "associated in other tenants. 
[workflow_identifier=%s]" % identifier ) # Check event triggers. event_triggers = get_event_triggers( insecure=True, workflow_id=wf_def.id ) for e_t in event_triggers: if e_t.project_id != wf_def.project_id: raise exc.NotAllowedException( "Can not update scope of workflow that has event triggers " "associated in other tenants. [workflow_identifier=%s]" % identifier ) wf_def.update(values.copy()) return wf_def @b.session_aware() def create_or_update_workflow_definition(name, values, session=None): if not _get_db_object_by_name(models.WorkflowDefinition, name): return create_workflow_definition(values) else: return update_workflow_definition(name, values) @b.session_aware() def delete_workflow_definition(identifier, namespace='', session=None): wf_def = get_workflow_definition(identifier, namespace) m_dbutils.check_db_obj_access(wf_def) cron_triggers = get_cron_triggers(insecure=True, workflow_id=wf_def.id) if cron_triggers: raise exc.DBError( "Can't delete workflow that has cron triggers associated. " "[workflow_identifier=%s], [cron_trigger_id(s)=%s]" % (identifier, ', '.join([t.id for t in cron_triggers])) ) event_triggers = get_event_triggers(insecure=True, workflow_id=wf_def.id) if event_triggers: raise exc.DBError( "Can't delete workflow that has event triggers associated. " "[workflow_identifier=%s], [event_trigger_id(s)=%s]" % (identifier, ', '.join([t.id for t in event_triggers])) ) # Delete workflow members first. delete_resource_members(resource_type='workflow', resource_id=wf_def.id) session.delete(wf_def) @b.session_aware() def delete_workflow_definitions(session=None, **kwargs): return _delete_all(models.WorkflowDefinition, **kwargs) # Action definitions. 
@b.session_aware() def get_action_definition_by_id(id, session=None): action_def = _get_db_object_by_id(models.ActionDefinition, id) if not action_def: raise exc.DBEntityNotFoundError( "Action not found [action_id=%s]" % id ) return action_def @b.session_aware() def get_action_definition(identifier, session=None): a_def = _get_db_object_by_name_and_namespace_or_id( models.ActionDefinition, identifier ) if not a_def: raise exc.DBEntityNotFoundError( "Action definition not found [action_name=%s]" % identifier ) return a_def @b.session_aware() def load_action_definition(name, session=None): return _get_db_object_by_name(models.ActionDefinition, name) @b.session_aware() def get_action_definitions(session=None, **kwargs): return _get_collection(model=models.ActionDefinition, **kwargs) @b.session_aware() def create_action_definition(values, session=None): a_def = models.ActionDefinition() a_def.update(values) try: a_def.save(session=session) except db_exc.DBDuplicateEntry as e: raise exc.DBDuplicateEntryError( "Duplicate entry for action %s: %s" % (a_def.name, e.columns) ) return a_def @b.session_aware() def update_action_definition(identifier, values, session=None): a_def = get_action_definition(identifier) a_def.update(values.copy()) return a_def @b.session_aware() def create_or_update_action_definition(name, values, session=None): if not _get_db_object_by_name(models.ActionDefinition, name): return create_action_definition(values) else: return update_action_definition(name, values) @b.session_aware() def delete_action_definition(identifier, session=None): a_def = get_action_definition(identifier) session.delete(a_def) @b.session_aware() def delete_action_definitions(session=None, **kwargs): return _delete_all(models.ActionDefinition, **kwargs) # Action executions. 
@b.session_aware() def get_action_execution(id, session=None): a_ex = _get_db_object_by_id(models.ActionExecution, id) if not a_ex: raise exc.DBEntityNotFoundError( "ActionExecution not found [id=%s]" % id ) return a_ex @b.session_aware() def load_action_execution(id, session=None): return _get_db_object_by_id(models.ActionExecution, id) @b.session_aware() def get_action_executions(session=None, **kwargs): return _get_action_executions(**kwargs) @b.session_aware() def create_action_execution(values, session=None): a_ex = models.ActionExecution() a_ex.update(values.copy()) try: a_ex.save(session=session) except db_exc.DBDuplicateEntry as e: raise exc.DBDuplicateEntryError( "Duplicate entry for ActionExecution: %s" % e.columns ) return a_ex @b.session_aware() def update_action_execution(id, values, session=None): a_ex = get_action_execution(id) a_ex.update(values.copy()) return a_ex @b.session_aware() def create_or_update_action_execution(id, values, session=None): if not _get_db_object_by_id(models.ActionExecution, id): return create_action_execution(values) else: return update_action_execution(id, values) @b.session_aware() def delete_action_execution(id, session=None): count = _secure_query(models.ActionExecution).filter( models.ActionExecution.id == id).delete() if count == 0: raise exc.DBEntityNotFoundError( "ActionExecution not found [id=%s]" % id ) @b.session_aware() def delete_action_executions(session=None, **kwargs): return _delete_all(models.ActionExecution, **kwargs) def _get_action_executions(**kwargs): return _get_collection(models.ActionExecution, **kwargs) # Workflow executions. 
@b.session_aware() def get_workflow_execution(id, session=None): ctx = context.ctx() wf_ex = _get_db_object_by_id( models.WorkflowExecution, id, insecure=ctx.is_admin ) if not wf_ex: raise exc.DBEntityNotFoundError( "WorkflowExecution not found [id=%s]" % id ) return wf_ex @b.session_aware() def load_workflow_execution(id, session=None): return _get_db_object_by_id(models.WorkflowExecution, id) @b.session_aware() def get_workflow_executions(session=None, **kwargs): return _get_collection(models.WorkflowExecution, **kwargs) @b.session_aware() def create_workflow_execution(values, session=None): wf_ex = models.WorkflowExecution() wf_ex.update(values.copy()) try: wf_ex.save(session=session) except db_exc.DBDuplicateEntry as e: raise exc.DBDuplicateEntryError( "Duplicate entry for WorkflowExecution with ID {value} ".format( value=e.value ) ) return wf_ex @b.session_aware() def update_workflow_execution(id, values, session=None): wf_ex = get_workflow_execution(id) m_dbutils.check_db_obj_access(wf_ex) wf_ex.update(values.copy()) return wf_ex @b.session_aware() def create_or_update_workflow_execution(id, values, session=None): if not _get_db_object_by_id(models.WorkflowExecution, id): return create_workflow_execution(values) else: return update_workflow_execution(id, values) @b.session_aware() def delete_workflow_execution(id, session=None): model = models.WorkflowExecution insecure = context.ctx().is_admin query = b.model_query(model) if insecure else _secure_query(model) count = query.filter( models.WorkflowExecution.id == id).delete() if count == 0: raise exc.DBEntityNotFoundError( "WorkflowExecution not found [id=%s]" % id ) @b.session_aware() def delete_workflow_executions(session=None, **kwargs): return _delete_all(models.WorkflowExecution, **kwargs) def update_workflow_execution_state(id, cur_state, state): specimen = models.WorkflowExecution(id=id, state=cur_state) return update_on_match(id, specimen, {'state': state}) # Tasks executions. 
@b.session_aware() def get_task_execution(id, session=None): task_ex = _get_db_object_by_id(models.TaskExecution, id) if not task_ex: raise exc.DBEntityNotFoundError( "Task execution not found [id=%s]" % id ) return task_ex @b.session_aware() def load_task_execution(id, session=None): return _get_db_object_by_id(models.TaskExecution, id) @b.session_aware() def get_task_executions(session=None, **kwargs): return _get_collection(models.TaskExecution, **kwargs) def _get_completed_task_executions_query(kwargs): query = b.model_query(models.TaskExecution) query = query.filter_by(**kwargs) query = query.filter( models.TaskExecution.state.in_( [states.ERROR, states.CANCELLED, states.SUCCESS] ) ) return query @b.session_aware() def get_completed_task_executions(session=None, **kwargs): query = _get_completed_task_executions_query(kwargs) return query.all() def _get_incomplete_task_executions_query(kwargs): query = b.model_query(models.TaskExecution) query = query.filter_by(**kwargs) query = query.filter( models.TaskExecution.state.in_( [states.IDLE, states.RUNNING, states.WAITING, states.RUNNING_DELAYED, states.PAUSED] ) ) return query @b.session_aware() def get_incomplete_task_executions(session=None, **kwargs): query = _get_incomplete_task_executions_query(kwargs) return query.all() @b.session_aware() def get_incomplete_task_executions_count(session=None, **kwargs): query = _get_incomplete_task_executions_query(kwargs) return query.count() @b.session_aware() def create_task_execution(values, session=None): task_ex = models.TaskExecution() task_ex.update(values) try: task_ex.save(session=session) except db_exc.DBDuplicateEntry as e: raise exc.DBDuplicateEntryError( "Duplicate entry for TaskExecution: %s" % e.columns ) return task_ex @b.session_aware() def update_task_execution(id, values, session=None): task_ex = get_task_execution(id) task_ex.update(values.copy()) return task_ex @b.session_aware() def create_or_update_task_execution(id, values, session=None): if not 
_get_db_object_by_id(models.TaskExecution, id): return create_task_execution(values) else: return update_task_execution(id, values) @b.session_aware() def delete_task_execution(id, session=None): count = _secure_query(models.TaskExecution).filter( models.TaskExecution.id == id).delete() if count == 0: raise exc.DBEntityNotFoundError( "Task execution not found [id=%s]" % id ) @b.session_aware() def delete_task_executions(session=None, **kwargs): return _delete_all(models.TaskExecution, **kwargs) def update_task_execution_state(id, cur_state, state): specimen = models.TaskExecution(id=id, state=cur_state) return update_on_match(id, specimen, {'state': state}) # Delayed calls. @b.session_aware() def create_delayed_call(values, session=None): delayed_call = models.DelayedCall() delayed_call.update(values.copy()) try: delayed_call.save(session) except db_exc.DBDuplicateEntry as e: raise exc.DBDuplicateEntryError( "Duplicate entry for DelayedCall: %s" % e.columns ) return delayed_call @b.session_aware() def delete_delayed_call(id, session=None): # It's safe to use insecure query here because users can't access # delayed calls. 
count = b.model_query(models.DelayedCall).filter( models.DelayedCall.id == id).delete() if count == 0: raise exc.DBEntityNotFoundError( "Delayed Call not found [id=%s]" % id ) @b.session_aware() def get_delayed_calls_to_start(time, batch_size=None, session=None): query = b.model_query(models.DelayedCall) query = query.filter(models.DelayedCall.execution_time < time) query = query.filter_by(processing=False) query = query.order_by(models.DelayedCall.execution_time) query = query.limit(batch_size) return query.all() @b.session_aware() def update_delayed_call(id, values, query_filter=None, session=None): if query_filter: try: specimen = models.DelayedCall(id=id, **query_filter) delayed_call = b.model_query( models.DelayedCall).update_on_match(specimen=specimen, surrogate_key='id', values=values) return delayed_call, 1 except oslo_sqlalchemy.update_match.NoRowsMatched as e: LOG.debug( "No rows matched for update call [id=%s, values=%s, " "query_filter=%s," "exception=%s]", id, values, query_filter, e ) return None, 0 else: delayed_call = get_delayed_call(id=id, session=session) delayed_call.update(values) return delayed_call, len(session.dirty) @b.session_aware() def get_delayed_call(id, session=None): delayed_call = _get_db_object_by_id(models.DelayedCall, id) if not delayed_call: raise exc.DBEntityNotFoundError( "Delayed Call not found [id=%s]" % id ) return delayed_call @b.session_aware() def get_delayed_calls(session=None, **kwargs): return _get_collection(model=models.DelayedCall, **kwargs) @b.session_aware() def delete_delayed_calls(session=None, **kwargs): return _delete_all(models.DelayedCall, **kwargs) @b.session_aware() def get_expired_executions(expiration_time, limit=None, columns=(), session=None): query = _get_completed_root_executions_query(columns) query = query.filter(models.WorkflowExecution.updated_at < expiration_time) if limit: query = query.limit(limit) return query.all() @b.session_aware() def get_superfluous_executions(max_finished_executions, 
limit=None, columns=(), session=None): if not max_finished_executions: return [] query = _get_completed_root_executions_query(columns) query = query.order_by(models.WorkflowExecution.updated_at.desc()) query = query.offset(max_finished_executions) if limit: query = query.limit(limit) return query.all() def _get_completed_root_executions_query(columns): query = b.model_query(models.WorkflowExecution, columns=columns) # Only WorkflowExecution that are not a child of other WorkflowExecution. query = query.filter(models.WorkflowExecution. task_execution_id == sa.null()) query = query.filter( models.WorkflowExecution.state.in_( [states.SUCCESS, states.ERROR, states.CANCELLED] ) ) return query @b.session_aware() def get_cron_trigger(identifier, session=None): ctx = context.ctx() cron_trigger = _get_db_object_by_name_and_namespace_or_id( models.CronTrigger, identifier, insecure=ctx.is_admin ) if not cron_trigger: raise exc.DBEntityNotFoundError( "Cron trigger not found [identifier=%s]" % identifier ) return cron_trigger @b.session_aware() def get_cron_trigger_by_id(id, session=None): ctx = context.ctx() cron_trigger = _get_db_object_by_id(models.CronTrigger, id, insecure=ctx.is_admin) if not cron_trigger: raise exc.DBEntityNotFoundError( "Cron trigger not found [id=%s]" % id ) return cron_trigger @b.session_aware() def load_cron_trigger(identifier, session=None): return _get_db_object_by_name_and_namespace_or_id( models.CronTrigger, identifier ) @b.session_aware() def get_cron_triggers(session=None, **kwargs): return _get_collection(models.CronTrigger, **kwargs) @b.session_aware() def get_next_cron_triggers(time, session=None): query = b.model_query(models.CronTrigger) query = query.filter(models.CronTrigger.next_execution_time < time) query = query.order_by(models.CronTrigger.next_execution_time) return query.all() @b.session_aware() def create_cron_trigger(values, session=None): cron_trigger = models.CronTrigger() cron_trigger.update(values) try: 
        cron_trigger.save(session=session)
    except db_exc.DBDuplicateEntry as e:
        raise exc.DBDuplicateEntryError(
            "Duplicate entry for cron trigger %s: %s"
            % (cron_trigger.name, e.columns)
        )
    # TODO(nmakhotkin): Remove this 'except' after fixing
    # https://bugs.launchpad.net/oslo.db/+bug/1458583.
    except db_exc.DBError as e:
        raise exc.DBDuplicateEntryError(
            "Duplicate entry for cron trigger: %s" % e
        )

    return cron_trigger


@b.session_aware()
def update_cron_trigger(identifier, values, session=None, query_filter=None):
    cron_trigger = get_cron_trigger(identifier)

    if query_filter:
        try:
            # Execute the UPDATE statement with the query_filter as the WHERE.
            specimen = models.CronTrigger(id=cron_trigger.id, **query_filter)

            query = b.model_query(models.CronTrigger)

            cron_trigger = query.update_on_match(
                specimen=specimen,
                surrogate_key='id',
                values=values
            )

            return cron_trigger, 1

        except oslo_sqlalchemy.update_match.NoRowsMatched:
            LOG.debug(
                "No rows matched for cron update call "
                "[id=%s, values=%s, query_filter=%s]",
                identifier, values, query_filter
            )

            return cron_trigger, 0
    else:
        cron_trigger.update(values.copy())

        return cron_trigger, len(session.dirty)


@b.session_aware()
def create_or_update_cron_trigger(identifier, values, session=None):
    cron_trigger = _get_db_object_by_name_and_namespace_or_id(
        models.CronTrigger,
        identifier
    )

    if not cron_trigger:
        return create_cron_trigger(values)
    else:
        updated, _ = update_cron_trigger(identifier, values)
        return updated


@b.session_aware()
def delete_cron_trigger(identifier, session=None):
    cron_trigger = get_cron_trigger(identifier)

    m_dbutils.check_db_obj_access(cron_trigger)

    # Delete the cron trigger by ID and get the affected row count.
    table = models.CronTrigger.__table__
    result = session.execute(
        table.delete().where(table.c.id == cron_trigger.id)
    )

    return result.rowcount


@b.session_aware()
def delete_cron_triggers(session=None, **kwargs):
    return _delete_all(models.CronTrigger, **kwargs)


# Environments.
@b.session_aware() def get_environment(name, session=None): env = _get_db_object_by_name(models.Environment, name) if not env: raise exc.DBEntityNotFoundError( "Environment not found [name=%s]" % name ) return env @b.session_aware() def load_environment(name, session=None): return _get_db_object_by_name(models.Environment, name) @b.session_aware() def get_environments(session=None, **kwargs): return _get_collection(models.Environment, **kwargs) @b.session_aware() def create_environment(values, session=None): env = models.Environment() env.update(values) try: env.save(session=session) except db_exc.DBDuplicateEntry as e: raise exc.DBDuplicateEntryError( "Duplicate entry for Environment: %s" % e.columns ) return env @b.session_aware() def update_environment(name, values, session=None): env = get_environment(name) env.update(values) return env @b.session_aware() def create_or_update_environment(name, values, session=None): env = _get_db_object_by_name(models.Environment, name) if not env: return create_environment(values) else: return update_environment(name, values) @b.session_aware() def delete_environment(name, session=None): count = _secure_query(models.Environment).filter( models.Environment.name == name).delete() if count == 0: raise exc.DBEntityNotFoundError( "Environment not found [name=%s]" % name ) @b.session_aware() def delete_environments(session=None, **kwargs): return _delete_all(models.Environment, **kwargs) # Resource members. RESOURCE_MAPPING = { models.WorkflowDefinition: 'workflow', models.Workbook: 'workbook' } def _get_criterion(resource_id, member_id=None, is_owner=True): """Generates criterion for querying resource_member_v2 table.""" # Resource owner query resource membership with member_id. if is_owner and member_id: return sa.and_( models.ResourceMember.project_id == security.get_project_id(), models.ResourceMember.resource_id == resource_id, models.ResourceMember.member_id == member_id ) # Resource owner query resource memberships. 
elif is_owner and not member_id: return sa.and_( models.ResourceMember.project_id == security.get_project_id(), models.ResourceMember.resource_id == resource_id, ) # Other members query other resource membership. elif not is_owner and member_id and member_id != security.get_project_id(): return None # Resource member query resource memberships. return sa.and_( models.ResourceMember.member_id == security.get_project_id(), models.ResourceMember.resource_id == resource_id ) @b.session_aware() def create_resource_member(values, session=None): res_member = models.ResourceMember() res_member.update(values.copy()) try: res_member.save(session=session) except db_exc.DBDuplicateEntry as e: raise exc.DBDuplicateEntryError( "Duplicate entry for ResourceMember: %s" % e.columns ) return res_member @b.session_aware() def get_resource_member(resource_id, res_type, member_id, session=None): query = _secure_query(models.ResourceMember).filter_by( resource_type=res_type ) # Both resource owner and resource member can do query. res_member = query.filter( sa.or_( _get_criterion(resource_id, member_id), _get_criterion(resource_id, member_id, is_owner=False) ) ).first() if not res_member: raise exc.DBEntityNotFoundError( "Resource member not found [resource_id=%s, member_id=%s]" % (resource_id, member_id) ) return res_member @b.session_aware() def get_resource_members(resource_id, res_type, session=None): query = _secure_query(models.ResourceMember).filter_by( resource_type=res_type ) # Both resource owner and resource member can do query. res_members = query.filter( sa.or_( _get_criterion(resource_id), _get_criterion(resource_id, is_owner=False), ) ).all() return res_members @b.session_aware() def update_resource_member(resource_id, res_type, member_id, values, session=None): # Only member who is not the owner of the resource can update the # membership status. 
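The branches of `_get_criterion` above encode a small visibility policy: owners may look up any membership of their resource, while non-owners may only look up their own. The same logic can be sketched as a plain-Python predicate (a `row` dict stands in for a `ResourceMember` row; this is an illustration of the branching, not Mistral's API):

```python
def visible_membership(row, resource_id, current_project,
                       member_id=None, is_owner=True):
    """Mirror of the SQL criterion as a plain predicate over a row dict."""
    if is_owner and member_id:
        # Owner asking about one specific member of their resource.
        return (row['project_id'] == current_project
                and row['resource_id'] == resource_id
                and row['member_id'] == member_id)
    if is_owner:
        # Owner asking about all memberships of their resource.
        return (row['project_id'] == current_project
                and row['resource_id'] == resource_id)
    # A non-owner may only ask about their own membership.
    if member_id and member_id != current_project:
        return False
    return (row['member_id'] == current_project
            and row['resource_id'] == resource_id)

row = {'project_id': 'p1', 'resource_id': 'r1', 'member_id': 'p2'}
assert visible_membership(row, 'r1', 'p1')                      # owner sees it
assert visible_membership(row, 'r1', 'p2', is_owner=False)      # member sees it
assert not visible_membership(row, 'r1', 'p3', is_owner=False)  # outsider does not
```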
if member_id != security.get_project_id(): raise exc.DBEntityNotFoundError( "Resource member not found [resource_id=%s, member_id=%s]" % (resource_id, member_id) ) query = _secure_query(models.ResourceMember).filter_by( resource_type=res_type ) res_member = query.filter( _get_criterion(resource_id, member_id, is_owner=False) ).first() if not res_member: raise exc.DBEntityNotFoundError( "Resource member not found [resource_id=%s, member_id=%s]" % (resource_id, member_id) ) res_member.update(values.copy()) return res_member @b.session_aware() def delete_resource_member(resource_id, res_type, member_id, session=None): query = _secure_query(models.ResourceMember).\ filter_by(resource_type=res_type).\ filter(_get_criterion(resource_id, member_id)) # TODO(kong): Check association with cron triggers when deleting a workflow # member which is in 'accepted' status. count = query.delete() if count == 0: raise exc.DBEntityNotFoundError( "Resource member not found [resource_id=%s, member_id=%s]" % (resource_id, member_id) ) @b.session_aware() def delete_resource_members(session=None, **kwargs): return _delete_all(models.ResourceMember, **kwargs) def _get_accepted_resources(res_type): resources = _secure_query(models.ResourceMember).filter( sa.and_( models.ResourceMember.resource_type == res_type, models.ResourceMember.status == 'accepted', models.ResourceMember.member_id == security.get_project_id() ) ).all() return resources # Event triggers. @b.session_aware() def get_event_trigger(id, insecure=False, session=None): event_trigger = _get_db_object_by_id(models.EventTrigger, id, insecure) if not event_trigger: raise exc.DBEntityNotFoundError( "Event trigger not found [id=%s]." 
% id ) return event_trigger @b.session_aware() def load_event_trigger(id, insecure=False, session=None): return _get_db_object_by_id(models.EventTrigger, id, insecure) @b.session_aware() def get_event_triggers(session=None, **kwargs): return _get_collection(model=models.EventTrigger, **kwargs) @b.session_aware() def create_event_trigger(values, session=None): event_trigger = models.EventTrigger() event_trigger.update(values) try: event_trigger.save(session=session) except db_exc.DBDuplicateEntry as e: raise exc.DBDuplicateEntryError( "Duplicate entry for event trigger %s: %s" % (event_trigger.id, e.columns) ) # TODO(nmakhotkin): Remove this 'except' after fixing # https://bugs.launchpad.net/oslo.db/+bug/1458583. except db_exc.DBError as e: raise exc.DBDuplicateEntryError( "Duplicate entry for event trigger: %s" % e ) return event_trigger @b.session_aware() def update_event_trigger(id, values, session=None): event_trigger = get_event_trigger(id) event_trigger.update(values.copy()) return event_trigger @b.session_aware() def delete_event_trigger(id, session=None): # It's safe to use insecure query here because users can't access # delayed calls. count = b.model_query(models.EventTrigger).filter( models.EventTrigger.id == id).delete() if count == 0: raise exc.DBEntityNotFoundError( "Event trigger not found [id=%s]." % id ) @b.session_aware() def delete_event_triggers(session=None, **kwargs): return _delete_all(models.EventTrigger, **kwargs) # Locks. @b.session_aware() def create_named_lock(name, session=None): # This method has to work not through SQLAlchemy session because # session may not immediately issue an SQL query to a database # and instead just schedule it whereas we need to make sure to # issue a query immediately. 
session.flush() insert = models.NamedLock.__table__.insert() lock_id = utils.generate_unicode_uuid() session.execute(insert.values(id=lock_id, name=name)) session.flush() return lock_id @b.session_aware() def get_named_locks(session=None, **kwargs): return _get_collection(models.NamedLock, **kwargs) @b.session_aware() def delete_named_lock(lock_id, session=None): # This method has to work without SQLAlchemy session because # session may not immediately issue an SQL query to a database # and instead just schedule it whereas we need to make sure to # issue a query immediately. session.flush() table = models.NamedLock.__table__ delete = table.delete() session.execute(delete.where(table.c.id == lock_id)) session.flush() @contextlib.contextmanager def named_lock(name): # NOTE(rakhmerov): We can't use the well-known try-finally pattern here # because if lock creation failed then it means that the SQLAlchemy # session is no longer valid and we can't use to try to delete the lock. # All we can do here is to let the exception bubble up so that the # transaction management code could rollback the transaction. lock_id = create_named_lock(name) yield delete_named_lock(lock_id) mistral-6.0.0/mistral/db/v2/sqlalchemy/__init__.py0000666000175100017510000000000013245513261022050 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/db/v2/sqlalchemy/filters.py0000666000175100017510000000466213245513261022003 0ustar zuulzuul00000000000000# Copyright 2016 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
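The `named_lock` context manager above serializes critical sections across processes by inserting a row into a table whose `name` column is UNIQUE: a second acquirer either blocks or fails on the constraint until the first holder deletes its row. A minimal single-connection sketch of that unique-constraint locking idea with `sqlite3` (real Mistral works through SQLAlchemy sessions and explicit flushes, which this sketch omits):

```python
import contextlib
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE named_locks (id TEXT PRIMARY KEY, name TEXT UNIQUE)")

@contextlib.contextmanager
def named_lock(name):
    # Acquire: the UNIQUE constraint on 'name' makes a concurrent second
    # insert fail (or block, depending on the backend) until the holder
    # deletes its row.
    lock_id = str(uuid.uuid4())
    conn.execute("INSERT INTO named_locks VALUES (?, ?)", (lock_id, name))
    yield
    conn.execute("DELETE FROM named_locks WHERE id = ?", (lock_id,))

with named_lock('wf-1'):
    # A second acquisition of the same name while held hits the constraint.
    try:
        conn.execute("INSERT INTO named_locks VALUES ('x', 'wf-1')")
        raised = False
    except sqlite3.IntegrityError:
        raised = True
assert raised

# After the context exits, the name is free again.
with named_lock('wf-1'):
    pass
```

Note the deliberate absence of try/finally around the `yield`, matching the comment in the real code: if lock creation fails, the session is no longer valid and cleanup must be left to the transaction rollback.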
See the # License for the specific language governing permissions and limitations # under the License. import sqlalchemy as sa def apply_filters(query, model, **filters): filter_dict = {} for key, value in filters.items(): column_attr = getattr(model, key) if isinstance(value, dict): if 'in' in value: query = query.filter(column_attr.in_(value['in'])) elif 'nin' in value: query = query.filter(~column_attr.in_(value['nin'])) elif 'neq' in value: query = query.filter(column_attr != value['neq']) elif 'gt' in value: query = query.filter(column_attr > value['gt']) elif 'gte' in value: query = query.filter(column_attr >= value['gte']) elif 'lt' in value: query = query.filter(column_attr < value['lt']) elif 'lte' in value: query = query.filter(column_attr <= value['lte']) elif 'eq' in value: query = query.filter(column_attr == value['eq']) elif 'has' in value: like_pattern = '%{0}%'.format(value['has']) query = query.filter(column_attr.like(like_pattern)) else: filter_dict[key] = value # We need to handle tag case seprately. As tag datatype is MutableList. # TODO(hparekh): Need to think how can we get rid of this. tags = filters.pop('tags', None) # To match the tag list, a resource must contain at least all of the # tags present in the filter parameter. if tags: tag_attr = getattr(model, 'tags') if not isinstance(tags, list): expr = tag_attr.contains(tags) else: expr = sa.and_(*[tag_attr.contains(tag) for tag in tags]) query = query.filter(expr) if filter_dict: query = query.filter_by(**filter_dict) return query mistral-6.0.0/mistral/db/v2/api.py0000666000175100017510000003146313245513261016741 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
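The `apply_filters` helper in `filters.py` accepts either a bare value (plain equality) or a per-column operator dict such as `{'gte': 5}` or `{'has': 'work'}`. The accepted filter vocabulary can be illustrated by evaluating the same operators against plain dicts instead of a SQLAlchemy query (illustrative only — the real helper builds `query.filter(...)` expressions):

```python
OPS = {
    'eq':  lambda v, a: v == a,
    'neq': lambda v, a: v != a,
    'gt':  lambda v, a: v > a,
    'gte': lambda v, a: v >= a,
    'lt':  lambda v, a: v < a,
    'lte': lambda v, a: v <= a,
    'in':  lambda v, a: v in a,
    'nin': lambda v, a: v not in a,
    'has': lambda v, a: a in v,   # maps to SQL LIKE '%a%'
}

def matches(row, **filters):
    for key, value in filters.items():
        if isinstance(value, dict):
            op, arg = next(iter(value.items()))
            if not OPS[op](row[key], arg):
                return False
        elif row[key] != value:   # bare value means plain equality
            return False
    return True

row = {'name': 'my_workflow', 'priority': 5}
assert matches(row, name={'has': 'work'})
assert matches(row, priority={'gte': 5}, name='my_workflow')
assert not matches(row, priority={'in': [1, 2, 3]})
```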
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import contextlib from oslo_db import api as db_api _BACKEND_MAPPING = { 'sqlalchemy': 'mistral.db.v2.sqlalchemy.api', } IMPL = db_api.DBAPI('sqlalchemy', backend_mapping=_BACKEND_MAPPING) def setup_db(): IMPL.setup_db() def drop_db(): IMPL.drop_db() # Transaction control. def start_tx(): IMPL.start_tx() def commit_tx(): IMPL.commit_tx() def rollback_tx(): IMPL.rollback_tx() def end_tx(): IMPL.end_tx() @contextlib.contextmanager def transaction(read_only=False): with IMPL.transaction(read_only): yield def refresh(model): IMPL.refresh(model) # Locking. def acquire_lock(model, id): return IMPL.acquire_lock(model, id) # Workbooks. def get_workbook(name): return IMPL.get_workbook(name) def load_workbook(name): """Unlike get_workbook this method is allowed to return None.""" return IMPL.load_workbook(name) def get_workbooks(limit=None, marker=None, sort_keys=None, sort_dirs=None, fields=None, **kwargs): return IMPL.get_workbooks( limit=limit, marker=marker, sort_keys=sort_keys, sort_dirs=sort_dirs, fields=fields, **kwargs ) def create_workbook(values): return IMPL.create_workbook(values) def update_workbook(name, values): return IMPL.update_workbook(name, values) def create_or_update_workbook(name, values): return IMPL.create_or_update_workbook(name, values) def delete_workbook(name): IMPL.delete_workbook(name) def delete_workbooks(**kwargs): IMPL.delete_workbooks(**kwargs) # Workflow definitions. 
def get_workflow_definition(identifier, namespace=''): return IMPL.get_workflow_definition(identifier, namespace=namespace) def get_workflow_definition_by_id(id): return IMPL.get_workflow_definition_by_id(id) def load_workflow_definition(name, namespace=''): """Unlike get_workflow_definition this method is allowed to return None.""" return IMPL.load_workflow_definition(name, namespace) def get_workflow_definitions(limit=None, marker=None, sort_keys=None, sort_dirs=None, fields=None, **kwargs): return IMPL.get_workflow_definitions( limit=limit, marker=marker, sort_keys=sort_keys, sort_dirs=sort_dirs, fields=fields, **kwargs ) def create_workflow_definition(values): return IMPL.create_workflow_definition(values) def update_workflow_definition(identifier, values, namespace): return IMPL.update_workflow_definition(identifier, values, namespace) def create_or_update_workflow_definition(name, values): return IMPL.create_or_update_workflow_definition(name, values) def delete_workflow_definition(identifier, namespace=''): IMPL.delete_workflow_definition(identifier, namespace) def delete_workflow_definitions(**kwargs): IMPL.delete_workflow_definitions(**kwargs) # Action definitions. 
def get_action_definition_by_id(id): return IMPL.get_action_definition_by_id(id) def get_action_definition(name): return IMPL.get_action_definition(name) def load_action_definition(name): """Unlike get_action_definition this method is allowed to return None.""" return IMPL.load_action_definition(name) def get_action_definitions(limit=None, marker=None, sort_keys=None, sort_dirs=None, **kwargs): return IMPL.get_action_definitions( limit=limit, marker=marker, sort_keys=sort_keys, sort_dirs=sort_dirs, **kwargs ) def create_action_definition(values): return IMPL.create_action_definition(values) def update_action_definition(identifier, values): return IMPL.update_action_definition(identifier, values) def create_or_update_action_definition(name, values): return IMPL.create_or_update_action_definition(name, values) def delete_action_definition(name): return IMPL.delete_action_definition(name) def delete_action_definitions(**kwargs): return IMPL.delete_action_definitions(**kwargs) # Action executions. def get_action_execution(id): return IMPL.get_action_execution(id) def load_action_execution(name): """Unlike get_action_execution this method is allowed to return None.""" return IMPL.load_action_execution(name) def get_action_executions(**kwargs): return IMPL.get_action_executions(**kwargs) def create_action_execution(values): return IMPL.create_action_execution(values) def update_action_execution(id, values): return IMPL.update_action_execution(id, values) def create_or_update_action_execution(id, values): return IMPL.create_or_update_action_execution(id, values) def delete_action_execution(id): return IMPL.delete_action_execution(id) def delete_action_executions(**kwargs): IMPL.delete_action_executions(**kwargs) # Workflow executions. 
def get_workflow_execution(id): return IMPL.get_workflow_execution(id) def load_workflow_execution(name): """Unlike get_workflow_execution this method is allowed to return None.""" return IMPL.load_workflow_execution(name) def get_workflow_executions(limit=None, marker=None, sort_keys=None, sort_dirs=None, **kwargs): return IMPL.get_workflow_executions( limit=limit, marker=marker, sort_keys=sort_keys, sort_dirs=sort_dirs, **kwargs ) def create_workflow_execution(values): return IMPL.create_workflow_execution(values) def update_workflow_execution(id, values): return IMPL.update_workflow_execution(id, values) def create_or_update_workflow_execution(id, values): return IMPL.create_or_update_workflow_execution(id, values) def delete_workflow_execution(id): return IMPL.delete_workflow_execution(id) def delete_workflow_executions(**kwargs): IMPL.delete_workflow_executions(**kwargs) def update_workflow_execution_state(**kwargs): return IMPL.update_workflow_execution_state(**kwargs) # Tasks executions. 
def get_task_execution(id): return IMPL.get_task_execution(id) def load_task_execution(id): """Unlike get_task_execution this method is allowed to return None.""" return IMPL.load_task_execution(id) def get_task_executions(limit=None, marker=None, sort_keys=None, sort_dirs=None, **kwargs): return IMPL.get_task_executions( limit=limit, marker=marker, sort_keys=sort_keys, sort_dirs=sort_dirs, **kwargs ) def get_completed_task_executions(**kwargs): return IMPL.get_completed_task_executions(**kwargs) def get_incomplete_task_executions(**kwargs): return IMPL.get_incomplete_task_executions(**kwargs) def get_incomplete_task_executions_count(**kwargs): return IMPL.get_incomplete_task_executions_count(**kwargs) def create_task_execution(values): return IMPL.create_task_execution(values) def update_task_execution(id, values): return IMPL.update_task_execution(id, values) def create_or_update_task_execution(id, values): return IMPL.create_or_update_task_execution(id, values) def delete_task_execution(id): return IMPL.delete_task_execution(id) def delete_task_executions(**kwargs): return IMPL.delete_task_executions(**kwargs) def update_task_execution_state(**kwargs): return IMPL.update_task_execution_state(**kwargs) # Delayed calls. def get_delayed_calls_to_start(time, batch_size=None): return IMPL.get_delayed_calls_to_start(time, batch_size) def create_delayed_call(values): return IMPL.create_delayed_call(values) def delete_delayed_call(id): return IMPL.delete_delayed_call(id) def update_delayed_call(id, values, query_filter=None): return IMPL.update_delayed_call(id, values, query_filter) def get_delayed_call(id): return IMPL.get_delayed_call(id) def get_delayed_calls(**kwargs): return IMPL.get_delayed_calls(**kwargs) def delete_delayed_calls(**kwargs): return IMPL.delete_delayed_calls(**kwargs) # Cron triggers. 
def get_cron_trigger(identifier): return IMPL.get_cron_trigger(identifier) def get_cron_trigger_by_id(id): return IMPL.get_cron_trigger_by_id(id) def load_cron_trigger(identifier): """Unlike get_cron_trigger this method is allowed to return None.""" return IMPL.load_cron_trigger(identifier) def get_cron_triggers(**kwargs): return IMPL.get_cron_triggers(**kwargs) def get_next_cron_triggers(time): return IMPL.get_next_cron_triggers(time) def get_expired_executions(expiration_time, limit=None, columns=(), session=None): return IMPL.get_expired_executions( expiration_time, limit, columns ) def get_superfluous_executions(max_finished_executions, limit=None, columns=(), session=None): return IMPL.get_superfluous_executions( max_finished_executions, limit, columns ) def create_cron_trigger(values): return IMPL.create_cron_trigger(values) def update_cron_trigger(identifier, values, query_filter=None): return IMPL.update_cron_trigger(identifier, values, query_filter=query_filter) def create_or_update_cron_trigger(identifier, values): return IMPL.create_or_update_cron_trigger(identifier, values) def delete_cron_trigger(identifier): return IMPL.delete_cron_trigger(identifier) def delete_cron_triggers(**kwargs): return IMPL.delete_cron_triggers(**kwargs) # Environments. 
def get_environment(name): return IMPL.get_environment(name) def load_environment(name): """Unlike get_environment this method is allowed to return None.""" return IMPL.load_environment(name) def get_environments(limit=None, marker=None, sort_keys=None, sort_dirs=None, **kwargs): return IMPL.get_environments( limit=limit, marker=marker, sort_keys=sort_keys, sort_dirs=sort_dirs, **kwargs ) def create_environment(values): return IMPL.create_environment(values) def update_environment(name, values): return IMPL.update_environment(name, values) def create_or_update_environment(name, values): return IMPL.create_or_update_environment(name, values) def delete_environment(name): IMPL.delete_environment(name) def delete_environments(**kwargs): IMPL.delete_environments(**kwargs) # Resource members. def create_resource_member(values): return IMPL.create_resource_member(values) def get_resource_member(resource_id, res_type, member_id): return IMPL.get_resource_member(resource_id, res_type, member_id) def get_resource_members(resource_id, res_type): return IMPL.get_resource_members(resource_id, res_type) def update_resource_member(resource_id, res_type, member_id, values): return IMPL.update_resource_member( resource_id, res_type, member_id, values ) def delete_resource_member(resource_id, res_type, member_id): IMPL.delete_resource_member(resource_id, res_type, member_id) def delete_resource_members(**kwargs): IMPL.delete_resource_members(**kwargs) # Event triggers. 
def get_event_trigger(id, insecure=False): return IMPL.get_event_trigger(id, insecure) def load_event_trigger(id, insecure=False): return IMPL.load_event_trigger(id, insecure) def get_event_triggers(insecure=False, limit=None, marker=None, sort_keys=None, sort_dirs=None, fields=None, **kwargs): return IMPL.get_event_triggers( insecure=insecure, limit=limit, marker=marker, sort_keys=sort_keys, sort_dirs=sort_dirs, fields=fields, **kwargs ) def create_event_trigger(values): return IMPL.create_event_trigger(values) def update_event_trigger(id, values): return IMPL.update_event_trigger(id, values) def delete_event_trigger(id): return IMPL.delete_event_trigger(id) def delete_event_triggers(**kwargs): return IMPL.delete_event_triggers(**kwargs) # Locks. def create_named_lock(name): return IMPL.create_named_lock(name) def get_named_locks(limit=None, marker=None): return IMPL.get_named_locks(limit=limit, marker=marker) def delete_named_lock(lock_id): return IMPL.delete_named_lock(lock_id) @contextlib.contextmanager def named_lock(name): with IMPL.named_lock(name): yield mistral-6.0.0/mistral/db/v2/__init__.py0000666000175100017510000000000013245513261017706 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/db/sqlalchemy/0000775000175100017510000000000013245513604017421 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/db/sqlalchemy/model_base.py0000666000175100017510000001130113245513261022062 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. from oslo_db.sqlalchemy import models as oslo_models import sqlalchemy as sa from sqlalchemy import event from sqlalchemy.ext import declarative from sqlalchemy.orm import attributes from mistral.services import security from mistral import utils def id_column(): return sa.Column( sa.String(36), primary_key=True, default=utils.generate_unicode_uuid ) class _MistralModelBase(oslo_models.ModelBase, oslo_models.TimestampMixin): """Base class for all Mistral SQLAlchemy DB Models.""" __table__ = None __hash__ = object.__hash__ def __init__(self, **kwargs): for key, value in kwargs.items(): setattr(self, key, value) def __eq__(self, other): if type(self) is not type(other): return False for col in self.__table__.columns: # In case of single table inheritance a class attribute # corresponding to a table column may not exist so we need # to skip these attributes. if (hasattr(self, col.name) and hasattr(other, col.name) and getattr(self, col.name) != getattr(other, col.name)): return False return True def __ne__(self, other): return not self.__eq__(other) def to_dict(self): """sqlalchemy based automatic to_dict method.""" d = {col_name: col_val for col_name, col_val in self.iter_columns()} utils.datetime_to_str_in_dict(d, 'created_at') utils.datetime_to_str_in_dict(d, 'updated_at') return d def iter_column_names(self): """Returns an iterator for loaded column names. :return: A generator function for column names. """ # If a column is unloaded at this point, it is # probably deferred. We do not want to access it # here and thereby cause it to load. unloaded = attributes.instance_state(self).unloaded for col in self.__table__.columns: if col.name not in unloaded and hasattr(self, col.name): yield col.name def iter_columns(self): """Returns an iterator for loaded columns. :return: A generator function that generates tuples (column name, column value). 
""" for col_name in self.iter_column_names(): yield col_name, getattr(self, col_name) def get_clone(self): """Clones current object, loads all fields and returns the result.""" m = self.__class__() for col in self.__table__.columns: if hasattr(self, col.name): setattr(m, col.name, getattr(self, col.name)) setattr( m, 'created_at', utils.datetime_to_str(getattr(self, 'created_at')) ) updated_at = getattr(self, 'updated_at') # NOTE(nmakhotkin): 'updated_at' field is empty for just created # object since it has not updated yet. if updated_at: setattr(m, 'updated_at', utils.datetime_to_str(updated_at)) return m def __repr__(self): return '%s %s' % (type(self).__name__, self.to_dict().__repr__()) MistralModelBase = declarative.declarative_base(cls=_MistralModelBase) # Secure model related stuff. class MistralSecureModelBase(MistralModelBase): """Base class for all secure models.""" __abstract__ = True scope = sa.Column(sa.String(80), default='private') project_id = sa.Column(sa.String(80), default=security.get_project_id) created_at = sa.Column(sa.DateTime, default=lambda: utils.utc_now_sec()) updated_at = sa.Column(sa.DateTime, onupdate=lambda: utils.utc_now_sec()) def _set_project_id(target, value, oldvalue, initiator): return security.get_project_id() def register_secure_model_hooks(): # Make sure 'project_id' is always properly set. for sec_model_class in utils.iter_subclasses(MistralSecureModelBase): if '__abstract__' not in sec_model_class.__dict__: event.listen( sec_model_class.project_id, 'set', _set_project_id, retval=True ) mistral-6.0.0/mistral/db/sqlalchemy/sqlite_lock.py0000666000175100017510000000316213245513261022307 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from eventlet import semaphore _mutex = semaphore.Semaphore() _locks = {} def acquire_lock(obj_id, session): with _mutex: if obj_id not in _locks: _locks[obj_id] = (session, semaphore.BoundedSemaphore(1)) tup = _locks.get(obj_id) tup[1].acquire() # Make sure to update the dictionary once the lock is acquired # to adjust session ownership. _locks[obj_id] = (session, tup[1]) def release_locks(session): with _mutex: for obj_id, tup in _locks.items(): if tup[0] is session: tup[1].release() def get_locks(): return _locks def cleanup(): with _mutex: # NOTE: For the sake of simplicity we assume that we remove stale locks # after all tests because this kind of locking can only be used with # sqlite database. Supporting fully dynamically allocated (and removed) # locks is much more complex task. If this method is not called after # tests it will cause a memory leak. _locks.clear() mistral-6.0.0/mistral/db/sqlalchemy/base.py0000666000175100017510000001353413245513261020714 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg from oslo_db import options from oslo_db.sqlalchemy import enginefacade import osprofiler.sqlalchemy import sqlalchemy as sa from mistral.db.sqlalchemy import sqlite_lock from mistral import exceptions as exc from mistral import utils # Note(dzimine): sqlite only works for basic testing. options.set_defaults(cfg.CONF, connection="sqlite:///mistral.sqlite") _DB_SESSION_THREAD_LOCAL_NAME = "db_sql_alchemy_session" _facade = None _sqlalchemy_create_engine_orig = sa.create_engine def _get_facade(): global _facade if not _facade: _facade = enginefacade.LegacyEngineFacade( cfg.CONF.database.connection, sqlite_fk=True, autocommit=False, **dict(cfg.CONF.database.items()) ) if cfg.CONF.profiler.enabled: if cfg.CONF.profiler.trace_sqlalchemy: osprofiler.sqlalchemy.add_tracing( sa, _facade.get_engine(), 'db' ) return _facade # Monkey-patching sqlalchemy to set the isolation_level # as this configuration is not exposed by oslo_db. def _sqlalchemy_create_engine_wrapper(*args, **kwargs): # sqlite (used for unit testing and not allowed for production) # does not support READ_COMMITTED. # Checking the drivername using the args and not the get_driver_name() # method because that method requires a session. if args[0].drivername != 'sqlite': kwargs["isolation_level"] = "READ_COMMITTED" return _sqlalchemy_create_engine_orig(*args, **kwargs) def get_engine(): # If the patch was not applied yet. if sa.create_engine != _sqlalchemy_create_engine_wrapper: # Replace the original create_engine with our wrapper. 
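The monkey-patch above wraps `sqlalchemy.create_engine` so that every engine is created with `isolation_level="READ_COMMITTED"`, except for SQLite, which does not support that level. The generic wrap-a-factory-and-augment-kwargs pattern looks like this (a stub `make_engine` stands in for `sqlalchemy.create_engine` so the sketch needs no SQLAlchemy install):

```python
# Generic form of the monkey-patch: wrap a library factory so every call
# gains an extra keyword, except for backends that don't support it.
def make_engine(url, **kwargs):
    # Stand-in for sqlalchemy.create_engine; just echoes its arguments.
    return {'url': url, **kwargs}

_orig_make_engine = make_engine

def _make_engine_wrapper(url, **kwargs):
    # SQLite does not support READ COMMITTED, so leave it untouched.
    if not url.startswith('sqlite'):
        kwargs.setdefault('isolation_level', 'READ_COMMITTED')
    return _orig_make_engine(url, **kwargs)

make_engine = _make_engine_wrapper

assert make_engine('mysql://db')['isolation_level'] == 'READ_COMMITTED'
assert 'isolation_level' not in make_engine('sqlite:///f.db')
```

The guard in `get_engine` (`if sa.create_engine != _sqlalchemy_create_engine_wrapper`) keeps the patch idempotent, so calling `get_engine` repeatedly never wraps the wrapper.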
sa.create_engine = _sqlalchemy_create_engine_wrapper return _get_facade().get_engine() def _get_session(): return _get_facade().get_session() def _get_thread_local_session(): return utils.get_thread_local(_DB_SESSION_THREAD_LOCAL_NAME) def _get_or_create_thread_local_session(): ses = _get_thread_local_session() if ses: return ses, False ses = _get_session() _set_thread_local_session(ses) return ses, True def _set_thread_local_session(session): utils.set_thread_local(_DB_SESSION_THREAD_LOCAL_NAME, session) def session_aware(param_name="session"): """Decorator for methods working within db session.""" def _decorator(func): def _within_session(*args, **kw): # If 'created' flag is True it means that the transaction is # demarcated explicitly outside this module. ses, created = _get_or_create_thread_local_session() try: kw[param_name] = ses result = func(*args, **kw) if created: ses.commit() return result except Exception: if created: ses.rollback() raise finally: if created: _set_thread_local_session(None) ses.close() _within_session.__doc__ = func.__doc__ return _within_session return _decorator # Transaction management. def start_tx(): """Starts transaction. Opens new database session and starts new transaction assuming there wasn't any opened sessions within the same thread. """ if _get_thread_local_session(): raise exc.DataAccessException( "Database transaction has already been started." ) _set_thread_local_session(_get_session()) def release_locks_if_sqlite(session): if get_driver_name() == 'sqlite': sqlite_lock.release_locks(session) def commit_tx(): """Commits previously started database transaction.""" ses = _get_thread_local_session() if not ses: raise exc.DataAccessException( "Nothing to commit. Database transaction" " has not been previously started." ) ses.commit() def rollback_tx(): """Rolls back previously started database transaction.""" ses = _get_thread_local_session() if not ses: raise exc.DataAccessException( "Nothing to roll back. 
Database transaction has not been started." ) ses.rollback() def end_tx(): """Ends transaction. Ends current database transaction. It rolls back all uncommitted changes and closes database session. """ ses = _get_thread_local_session() if not ses: raise exc.DataAccessException( "Database transaction has not been started." ) if ses.dirty: rollback_tx() release_locks_if_sqlite(ses) ses.close() _set_thread_local_session(None) @session_aware() def get_driver_name(session=None): return session.bind.url.drivername @session_aware() def get_dialect_name(session=None): return session.bind.url.get_dialect().name @session_aware() def model_query(model, columns=(), session=None): """Query helper. :param model: Base model to query. :param columns: Optional. Which columns to be queried. """ if columns: return session.query(*columns) return session.query(model) mistral-6.0.0/mistral/db/sqlalchemy/__init__.py0000666000175100017510000000000013245513261021521 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/0000775000175100017510000000000013245513604021412 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/0000775000175100017510000000000013245513604025242 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/0000775000175100017510000000000013245513604027112 5ustar zuulzuul00000000000000././@LongLink0000000000000000000000000000015100000000000011212 Lustar 00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/019_change_scheduler_schema.pymistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/019_change_scheduler_schem0000666000175100017510000000323313245513261034072 0ustar zuulzuul00000000000000# Copyright 2016 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Change scheduler schema. Revision ID: 019 Revises: 018 Create Date: 2016-08-17 17:54:51.952949 """ # revision identifiers, used by Alembic. revision = '019' down_revision = '018' from alembic import op import sqlalchemy as sa from sqlalchemy.engine import reflection def upgrade(): inspect = reflection.Inspector.from_engine(op.get_bind()) unique_constraints = [ uc['name'] for uc in inspect.get_unique_constraints('delayed_calls_v2') ] if 'delayed_calls_v2_processing_execution_time' in unique_constraints: op.drop_index( 'delayed_calls_v2_processing_execution_time', table_name='delayed_calls_v2' ) if 'unique_key' in unique_constraints: op.drop_index('unique_key', table_name='delayed_calls_v2') op.drop_column('delayed_calls_v2', 'unique_key') op.add_column( 'delayed_calls_v2', sa.Column('key', sa.String(length=250), nullable=True) ) op.create_index( 'delayed_calls_v2_execution_time', 'delayed_calls_v2', ['execution_time'], unique=False ) ././@LongLink0000000000000000000000000000014600000000000011216 Lustar 00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/017_add_named_lock_table.pymistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/017_add_named_lock_table.p0000666000175100017510000000227413245513261033743 0ustar zuulzuul00000000000000# Copyright 2016 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Add named lock table Revision ID: 017 Revises: 016 Create Date: 2016-08-17 13:06:26.616451 """ # revision identifiers, used by Alembic. revision = '017' down_revision = '016' from alembic import op import sqlalchemy as sa def upgrade(): op.create_table( 'named_locks', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.String(length=36), nullable=False), sa.Column('name', sa.String(length=250), nullable=True), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('name') ) ././@LongLink0000000000000000000000000000016400000000000011216 Lustar 00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/008_increase_size_of_state_info_column.pymistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/008_increase_size_of_state0000666000175100017510000000167313245513261034143 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
"""Increase size of state_info column from String to Text Revision ID: 008 Revises: 007 Create Date: 2015-11-17 21:30:50.991290 """ # revision identifiers, used by Alembic. revision = '008' down_revision = '007' from alembic import op import sqlalchemy as sa def upgrade(): op.alter_column('executions_v2', 'state_info', type_=sa.Text()) ././@LongLink0000000000000000000000000000015200000000000011213 Lustar 00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/003_cron_trigger_constraints.pymistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/003_cron_trigger_constrain0000666000175100017510000000231413245513261034164 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """cron_trigger_constraints Revision ID: 003 Revises: 002 Create Date: 2015-05-25 13:09:50.190136 """ # revision identifiers, used by Alembic. 
revision = '003' down_revision = '002' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column( 'cron_triggers_v2', sa.Column('first_execution_time', sa.DateTime(), nullable=True) ) op.create_unique_constraint( None, 'cron_triggers_v2', [ 'workflow_input_hash', 'workflow_name', 'pattern', 'project_id', 'workflow_params_hash', 'remaining_executions', 'first_execution_time' ] ) ././@LongLink0000000000000000000000000000016500000000000011217 Lustar 00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/007_move_system_flag_to_base_definition.pymistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/007_move_system_flag_to_ba0000666000175100017510000000206513245513261034136 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Move system flag to base definition Revision ID: 007 Revises: 006 Create Date: 2015-09-15 11:24:43.081824 """ # revision identifiers, used by Alembic. 
revision = '007' down_revision = '006' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column( 'workbooks_v2', sa.Column('is_system', sa.Boolean(), nullable=True) ) op.add_column( 'workflow_definitions_v2', sa.Column('is_system', sa.Boolean(), nullable=True) ) ././@LongLink0000000000000000000000000000015700000000000011220 Lustar 00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/010_add_resource_members_v2_table.pymistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/010_add_resource_members_v0000666000175100017510000000310213245513261034110 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """add_resource_members_v2_table Revision ID: 010 Revises: 009 Create Date: 2015-11-15 08:39:58.772417 """ # revision identifiers, used by Alembic. 
revision = '010' down_revision = '009' from alembic import op import sqlalchemy as sa def upgrade(): op.create_table( 'resource_members_v2', sa.Column('id', sa.String(length=36), nullable=False), sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('project_id', sa.String(length=80), nullable=False), sa.Column('member_id', sa.String(length=80), nullable=False), sa.Column('resource_id', sa.String(length=80), nullable=False), sa.Column('resource_type', sa.String(length=50), nullable=False), sa.Column('status', sa.String(length=20), nullable=False), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint( 'resource_id', 'resource_type', 'member_id' ) ) ././@LongLink0000000000000000000000000000015700000000000011220 Lustar 00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/004_add_description_for_execution.pymistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/004_add_description_for_ex0000666000175100017510000000170313245513261034117 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """add description for execution Revision ID: 004 Revises: 003 Create Date: 2015-06-10 14:23:54.494596 """ # revision identifiers, used by Alembic. 
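The `UniqueConstraint('resource_id', 'resource_type', 'member_id')` in migration 010 above means the same member can be granted access to a given resource only once; a duplicate insert is rejected by the database rather than silently creating a second row. A stdlib sketch of that behavior — illustrative only, not Mistral code; the column subset and sample values are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE resource_members_v2 ("
    " id TEXT PRIMARY KEY,"
    " resource_id TEXT NOT NULL,"
    " resource_type TEXT NOT NULL,"
    " member_id TEXT NOT NULL,"
    " UNIQUE (resource_id, resource_type, member_id))"
)
conn.execute(
    "INSERT INTO resource_members_v2 VALUES ('1', 'wf-1', 'workflow', 'tenant-a')"
)

try:
    # Same (resource_id, resource_type, member_id) triple, different primary key.
    conn.execute(
        "INSERT INTO resource_members_v2 VALUES ('2', 'wf-1', 'workflow', 'tenant-a')"
    )
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```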
revision = '004' down_revision = '003' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column( 'executions_v2', sa.Column('description', sa.String(length=255), nullable=True) ) ././@LongLink0000000000000000000000000000014600000000000011216 Lustar 00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/009_add_database_indices.pymistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/009_add_database_indices.p0000666000175100017510000001156513245513261033746 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Add database indices Revision ID: 009 Revises: 008 Create Date: 2015-11-25 19:06:14.975474 """ # revision identifiers, used by Alembic. 
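Migration 004 above adds `description` with `nullable=True`, and that choice is not incidental: rows that already exist receive no value for a newly added column, so the column must either be nullable or carry a server-side default. A stdlib sketch of the effect — illustrative only, not Mistral code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE executions_v2 (id TEXT PRIMARY KEY)")
conn.execute("INSERT INTO executions_v2 VALUES ('e-1')")

# Rough equivalent of op.add_column('executions_v2', sa.Column('description', ...)).
conn.execute("ALTER TABLE executions_v2 ADD COLUMN description VARCHAR(255)")

# The pre-existing row has no description; it comes back as NULL.
description = conn.execute(
    "SELECT description FROM executions_v2 WHERE id = 'e-1'"
).fetchone()[0]
```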
revision = '009' down_revision = '008' from alembic import op from sqlalchemy.engine import reflection def upgrade(): inspector = reflection.Inspector.from_engine(op.get_bind()) op.create_index( 'action_definitions_v2_action_class', 'action_definitions_v2', ['action_class'], unique=False ) op.create_index( 'action_definitions_v2_is_system', 'action_definitions_v2', ['is_system'], unique=False ) op.create_index( 'action_definitions_v2_project_id', 'action_definitions_v2', ['project_id'], unique=False ) op.create_index( 'action_definitions_v2_scope', 'action_definitions_v2', ['scope'], unique=False ) op.create_index( 'cron_triggers_v2_next_execution_time', 'cron_triggers_v2', ['next_execution_time'], unique=False ) op.create_index( 'cron_triggers_v2_project_id', 'cron_triggers_v2', ['project_id'], unique=False ) op.create_index( 'cron_triggers_v2_scope', 'cron_triggers_v2', ['scope'], unique=False ) op.create_index( 'cron_triggers_v2_workflow_name', 'cron_triggers_v2', ['workflow_name'], unique=False ) cron_v2_constrs = [uc['name'] for uc in inspector.get_unique_constraints('cron_triggers_v2')] if ('cron_triggers_v2_workflow_input_hash_workflow_name_pattern__key' in cron_v2_constrs): op.drop_constraint( 'cron_triggers_v2_workflow_input_hash_workflow_name_pattern__key', 'cron_triggers_v2', type_='unique' ) if ('cron_triggers_v2_workflow_input_hash_workflow_name_pattern_key1' in cron_v2_constrs): op.drop_constraint( 'cron_triggers_v2_workflow_input_hash_workflow_name_pattern_key1', 'cron_triggers_v2', type_='unique' ) op.create_index( 'delayed_calls_v2_processing_execution_time', 'delayed_calls_v2', ['processing', 'execution_time'], unique=False ) op.create_index( 'environments_v2_name', 'environments_v2', ['name'], unique=False ) op.create_index( 'environments_v2_project_id', 'environments_v2', ['project_id'], unique=False ) op.create_index( 'environments_v2_scope', 'environments_v2', ['scope'], unique=False ) op.create_index( 'executions_v2_project_id', 
'executions_v2', ['project_id'], unique=False ) op.create_index( 'executions_v2_scope', 'executions_v2', ['scope'], unique=False ) op.create_index( 'executions_v2_state', 'executions_v2', ['state'], unique=False ) op.create_index( 'executions_v2_task_execution_id', 'executions_v2', ['task_execution_id'], unique=False ) op.create_index( 'executions_v2_type', 'executions_v2', ['type'], unique=False ) op.create_index( 'executions_v2_updated_at', 'executions_v2', ['updated_at'], unique=False ) op.create_index( 'executions_v2_workflow_execution_id', 'executions_v2', ['workflow_execution_id'], unique=False ) op.create_index( 'workbooks_v2_project_id', 'workbooks_v2', ['project_id'], unique=False ) op.create_index( 'workbooks_v2_scope', 'workbooks_v2', ['scope'], unique=False ) op.create_index( 'workflow_definitions_v2_is_system', 'workflow_definitions_v2', ['is_system'], unique=False ) op.create_index( 'workflow_definitions_v2_project_id', 'workflow_definitions_v2', ['project_id'], unique=False ) op.create_index( 'workflow_definitions_v2_scope', 'workflow_definitions_v2', ['scope'], unique=False ) ././@LongLink0000000000000000000000000000020000000000000011205 Lustar 00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/024_add_composite_index_workflow_execution_id_name.pymistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/024_add_composite_index_wo0000666000175100017510000000171713245513261034137 0ustar zuulzuul00000000000000# Copyright 2017 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. 
# See the License for the specific language governing permissions and # limitations under the License. """Add database indices Revision ID: 024 Revises: 023 Create Date: 2017-10-11 15:23:04.904251 """ # revision identifiers, used by Alembic. revision = '024' down_revision = '023' from alembic import op def upgrade(): op.create_index('task_executions_v2_workflow_execution_id_name', 'task_executions_v2', ['workflow_execution_id', 'name']) ././@LongLink0000000000000000000000000000015300000000000011214 Lustar 00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/021_increase_env_columns_size.pymistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/021_increase_env_columns_s0000666000175100017510000000206413245513261034145 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Increase environments_v2 column size from JsonDictType to JsonLongDictType Revision ID: 021 Revises: 020 Create Date: 2017-06-13 13:29:41.636094 """ # revision identifiers, used by Alembic. 
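The composite index created by migration 024 above keys on `(workflow_execution_id, name)`, so it serves lookups that filter on `workflow_execution_id` alone as well as on the full pair, since the leftmost column leads the key. A stdlib sketch — illustrative only; the minimal table shape is an assumption:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE task_executions_v2 ("
    " id TEXT PRIMARY KEY,"
    " workflow_execution_id TEXT,"
    " name TEXT)"
)
conn.execute(
    "CREATE INDEX task_executions_v2_workflow_execution_id_name"
    " ON task_executions_v2 (workflow_execution_id, name)"
)

index_names = [
    row[0]
    for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'index'"
    )
]

# The query plan for a lookup on both columns can use the composite index.
plan = " ".join(
    row[-1]
    for row in conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM task_executions_v2"
        " WHERE workflow_execution_id = 'wf-1' AND name = 'task1'"
    )
)
```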
revision = '021' down_revision = '020' from alembic import op from mistral.db.sqlalchemy import types as st def upgrade(): # Changing column types from JsonDictType to JsonLongDictType op.alter_column('environments_v2', 'variables', type_=st.JsonLongDictType()) ././@LongLink0000000000000000000000000000016200000000000011214 Lustar 00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/016_increase_size_of_task_unique_key.pymistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/016_increase_size_of_task_0000666000175100017510000000165113245513261034117 0ustar zuulzuul00000000000000# Copyright 2016 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Increase size of task_executions_v2.unique_key Revision ID: 016 Revises: 015 Create Date: 2016-08-11 15:57:23.241734 """ # revision identifiers, used by Alembic. revision = '016' down_revision = '015' from alembic import op import sqlalchemy as sa def upgrade(): op.alter_column('task_executions_v2', 'unique_key', type_=sa.String(200)) ././@LongLink0000000000000000000000000000015500000000000011216 Lustar 00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/012_add_event_triggers_v2_table.pymistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/012_add_event_triggers_v2_0000666000175100017510000000431313245513261034026 0ustar zuulzuul00000000000000# Copyright 2016 OpenStack Foundation. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """add event triggers table Revision ID: 012 Revises: 011 Create Date: 2016-03-04 09:49:52.481791 """ # revision identifiers, used by Alembic. revision = '012' down_revision = '011' from alembic import op import sqlalchemy as sa from mistral.db.sqlalchemy import types as st def upgrade(): op.create_table( 'event_triggers_v2', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('project_id', sa.String(length=80), nullable=True), sa.Column('scope', sa.String(length=80), nullable=True), sa.Column('id', sa.String(length=36), nullable=False), sa.Column('name', sa.String(length=200), nullable=True), sa.Column('workflow_id', sa.String(length=36), nullable=False), sa.Column('exchange', sa.String(length=80), nullable=False), sa.Column('topic', sa.String(length=80), nullable=False), sa.Column('event', sa.String(length=80), nullable=False), sa.Column('workflow_params', st.JsonEncoded(), nullable=True), sa.Column('workflow_input', st.JsonEncoded(), nullable=True), sa.Column('trust_id', sa.String(length=80), nullable=True), sa.ForeignKeyConstraint( ['workflow_id'], [u'workflow_definitions_v2.id'], ), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint( 'exchange', 'topic', 'event', 'workflow_id', 'project_id' ), sa.Index( 'event_triggers_v2_project_id_workflow_id', 'project_id', 'workflow_id' ) ) 
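The `ForeignKeyConstraint` in the `create_table` above ties every event trigger to an existing workflow definition, and the unique constraint over `(exchange, topic, event, workflow_id, project_id)` prevents registering the same trigger twice. A stdlib sketch of the foreign-key side — illustrative only, not Mistral code; the reduced column set and sample values are assumptions (SQLite requires opting in to FK enforcement per connection):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE workflow_definitions_v2 (id TEXT PRIMARY KEY)")
conn.execute(
    "CREATE TABLE event_triggers_v2 ("
    " id TEXT PRIMARY KEY,"
    " workflow_id TEXT NOT NULL REFERENCES workflow_definitions_v2 (id),"
    " exchange TEXT, topic TEXT, event TEXT, project_id TEXT,"
    " UNIQUE (exchange, topic, event, workflow_id, project_id))"
)

conn.execute("INSERT INTO workflow_definitions_v2 VALUES ('wf-1')")
conn.execute(
    "INSERT INTO event_triggers_v2 VALUES"
    " ('t-1', 'wf-1', 'nova', 'notifications', 'compute.instance.create.end', 'p-1')"
)

try:
    # A trigger pointing at a workflow that does not exist is rejected.
    conn.execute(
        "INSERT INTO event_triggers_v2 VALUES"
        " ('t-2', 'missing-wf', 'nova', 'notifications', 'x.y', 'p-1')"
    )
    fk_enforced = False
except sqlite3.IntegrityError:
    fk_enforced = True
```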
mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/022_namespace_support.py0000666000175100017510000000712613245513261033606 0ustar zuulzuul00000000000000# Copyright 2017 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """namespace_support Revision ID: 022 Revises: 021 Create Date: 2017-06-11 13:09:06.782095 """ # revision identifiers, used by Alembic. revision = '022' down_revision = '021' from alembic import op import sqlalchemy as sa from sqlalchemy.engine import reflection from sqlalchemy.sql import table, column # A simple model of the workflow definitions table with only the field needed wf_def = table('workflow_definitions_v2', column('namespace')) # A simple model of the workflow executions table with only the field needed wf_exec = table('workflow_executions_v2', column('workflow_namespace')) # A simple model of the task executions table with only the field needed task_exec = table('task_executions_v2', column('workflow_namespace')) # A simple model of the action executions table with only the fields needed action_executions = sa.Table( 'action_executions_v2', sa.MetaData(), sa.Column('id', sa.String(36), nullable=False), sa.Column('workflow_name', sa.String(255)), sa.Column('workflow_namespace', sa.String(255), nullable=True) ) def upgrade(): # ### commands auto generated by Alembic - please adjust! 
### op.add_column( 'workflow_definitions_v2', sa.Column( 'namespace', sa.String(length=255), nullable=True ) ) inspect = reflection.Inspector.from_engine(op.get_bind()) unique_constraints = [ unique_constraint['name'] for unique_constraint in inspect.get_unique_constraints('workflow_definitions_v2') ] if 'name' in unique_constraints: op.drop_index('name', table_name='workflow_definitions_v2') op.create_unique_constraint( None, 'workflow_definitions_v2', ['name', 'namespace', 'project_id'] ) op.add_column( 'workflow_executions_v2', sa.Column( 'workflow_namespace', sa.String(length=255), nullable=True ) ) op.add_column( 'task_executions_v2', sa.Column( 'workflow_namespace', sa.String(length=255), nullable=True ) ) op.add_column( 'action_executions_v2', sa.Column('workflow_namespace', sa.String(length=255), nullable=True) ) session = sa.orm.Session(bind=op.get_bind()) values = [] for row in session.query(action_executions): values.append({'id': row[0], 'workflow_name': row[1]}) with session.begin(subtransactions=True): session.execute(wf_def.update().values(namespace='')) session.execute(wf_exec.update().values(workflow_namespace='')) session.execute(task_exec.update().values(workflow_namespace='')) for value in values: if value['workflow_name']: session.execute(action_executions.update().values( workflow_namespace='' ).where(action_executions.c.id == value['id'])) # this commit appears to be necessary session.commit() # ### end Alembic commands ### ././@LongLink0000000000000000000000000000015400000000000011215 Lustar 00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/020_add_type_to_task_execution.pymistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/020_add_type_to_task_execu0000666000175100017510000000370113245513261034126 0ustar zuulzuul00000000000000# Copyright 2016 OpenStack Foundation. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """add type to task execution Revision ID: 020 Revises: 019 Create Date: 2016-10-05 13:24:52.911011 """ # revision identifiers, used by Alembic. revision = '020' down_revision = '019' from alembic import op from mistral.db.sqlalchemy import types as st import sqlalchemy as sa # A simple model of the task executions table with only the fields needed for # the migration. task_executions = sa.Table( 'task_executions_v2', sa.MetaData(), sa.Column('id', sa.String(36), nullable=False), sa.Column( 'spec', st.JsonMediumDictType() ), sa.Column('type', sa.String(10), nullable=True) ) def upgrade(): op.add_column( 'task_executions_v2', sa.Column('type', sa.String(length=10), nullable=True) ) session = sa.orm.Session(bind=op.get_bind()) values = [] for row in session.query(task_executions): values.append({'id': row[0], 'spec': row[1]}) with session.begin(subtransactions=True): for value in values: task_type = "ACTION" if "workflow" in value['spec']: task_type = "WORKFLOW" session.execute( task_executions.update().values(type=task_type).where( task_executions.c.id == value['id'] ) ) # this commit appears to be necessary session.commit() ././@LongLink0000000000000000000000000000017100000000000011214 Lustar 
00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/018_increate_task_execution_unique_key_size.pymistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/018_increate_task_executio0000666000175100017510000000164213245513261034152 0ustar zuulzuul00000000000000# Copyright 2016 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """increate_task_execution_unique_key_size Revision ID: 018 Revises: 017 Create Date: 2016-08-17 17:47:30.325182 """ # revision identifiers, used by Alembic. revision = '018' down_revision = '017' from alembic import op import sqlalchemy as sa def upgrade(): op.alter_column('task_executions_v2', 'unique_key', type_=sa.String(250)) ././@LongLink0000000000000000000000000000016000000000000011212 Lustar 00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/014_fix_past_scripts_discrepancies.pymistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/014_fix_past_scripts_discr0000666000175100017510000000462213245513261034176 0ustar zuulzuul00000000000000# Copyright 2016 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """fix_past_scripts_discrepancies Revision ID: 014 Revises: 013 Create Date: 2016-08-07 13:12:34.958845 """ # revision identifiers, used by Alembic. revision = '014' down_revision = '013' from alembic import op from sqlalchemy.dialects import mysql from sqlalchemy.engine import reflection def upgrade(): inspect = reflection.Inspector.from_engine(op.get_bind()) ct_unique_constraints = [ uc['name'] for uc in inspect.get_unique_constraints('cron_triggers_v2') ] # unique constraint was added in 001, 002 and 003 with slight variations # without deleting the previous ones. # here we try to delete all three in case they exist if 'workflow_input_hash' in ct_unique_constraints: op.drop_index('workflow_input_hash', table_name='cron_triggers_v2') if 'workflow_input_hash_2' in ct_unique_constraints: op.drop_index('workflow_input_hash_2', table_name='cron_triggers_v2') if 'workflow_input_hash_3' in ct_unique_constraints: op.drop_index('workflow_input_hash_3', table_name='cron_triggers_v2') # create the correct latest unique constraint for table cron_triggers_v2 op.create_unique_constraint( None, 'cron_triggers_v2', [ 'workflow_input_hash', 'workflow_name', 'pattern', 'project_id', 'workflow_params_hash', 'remaining_executions', 'first_execution_time' ] ) # column was added in 012. nullable value does not match today's model. op.alter_column( 'event_triggers_v2', 'workflow_id', existing_type=mysql.VARCHAR(length=36), nullable=True ) # column was added in 010. 
nullable value does not match today's model op.alter_column( 'resource_members_v2', 'project_id', existing_type=mysql.VARCHAR(length=80), nullable=True ) mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/__init__.py0000666000175100017510000000000013245513261031212 0ustar zuulzuul00000000000000././@LongLink0000000000000000000000000000016300000000000011215 Lustar 00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/006_add_processed_to_delayed_calls_v2.pymistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/006_add_processed_to_delay0000666000175100017510000000175413245513261034111 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """add a Boolean column 'processed' to the table delayed_calls_v2 Revision ID: 006 Revises: 005 Create Date: 2015-08-09 09:44:38.289271 """ # revision identifiers, used by Alembic. 
revision = '006' down_revision = '005' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column( 'delayed_calls_v2', sa.Column('processing', sa.Boolean, default=False, nullable=False) ) ././@LongLink0000000000000000000000000000016600000000000011220 Lustar 00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/013_split_execution_table_increase_names.pymistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/013_split_execution_table_0000666000175100017510000001757713245513261034166 0ustar zuulzuul00000000000000# Copyright 2016 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """split_execution_table_increase_names Revision ID: 013 Revises: 012 Create Date: 2016-08-02 11:03:03.263944 """ # revision identifiers, used by Alembic. 
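One common reason for a boolean `processing` flag like the one migration 006 adds above is to let competing workers claim a delayed call atomically: a conditional UPDATE flips the flag, and its rowcount tells the worker whether it won the claim. A stdlib sketch of that pattern — illustrative only, not Mistral's actual scheduler code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE delayed_calls_v2 ("
    " id TEXT PRIMARY KEY,"
    " processing INTEGER NOT NULL DEFAULT 0)"
)
conn.execute("INSERT INTO delayed_calls_v2 (id) VALUES ('call-1')")


def claim(db, call_id):
    # Only one claimer can flip processing from 0 to 1; later attempts match
    # zero rows, so their rowcount is 0.
    cur = db.execute(
        "UPDATE delayed_calls_v2 SET processing = 1"
        " WHERE id = ? AND processing = 0",
        (call_id,),
    )
    return cur.rowcount == 1


first_claim = claim(conn, "call-1")
second_claim = claim(conn, "call-1")
```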
from mistral.db.sqlalchemy import types as st from alembic import op import sqlalchemy as sa revision = '013' down_revision = '012' def upgrade(): op.create_table( 'action_executions_v2', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('scope', sa.String(length=80), nullable=True), sa.Column('project_id', sa.String(length=80), nullable=True), sa.Column('id', sa.String(length=36), nullable=False), sa.Column('name', sa.String(length=255), nullable=True), sa.Column('description', sa.String(length=255), nullable=True), sa.Column('workflow_name', sa.String(length=255), nullable=True), sa.Column('workflow_id', sa.String(length=80), nullable=True), sa.Column('spec', st.JsonMediumDictType(), nullable=True), sa.Column('state', sa.String(length=20), nullable=True), sa.Column('state_info', sa.TEXT(), nullable=True), sa.Column('tags', st.JsonListType(), nullable=True), sa.Column('runtime_context', st.JsonLongDictType(), nullable=True), sa.Column('accepted', sa.Boolean(), nullable=True), sa.Column('input', st.JsonLongDictType(), nullable=True), sa.Column('output', st.JsonLongDictType(), nullable=True), sa.Column('task_execution_id', sa.String(length=36), nullable=True), sa.PrimaryKeyConstraint('id'), sa.Index( 'action_executions_v2_project_id', 'project_id' ), sa.Index( 'action_executions_v2_scope', 'scope' ), sa.Index( 'action_executions_v2_state', 'state' ), sa.Index( 'action_executions_v2_updated_at', 'updated_at' ), ) op.create_table( 'workflow_executions_v2', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('scope', sa.String(length=80), nullable=True), sa.Column('project_id', sa.String(length=80), nullable=True), sa.Column('id', sa.String(length=36), nullable=False), sa.Column('name', sa.String(length=255), nullable=True), sa.Column('description', sa.String(length=255), nullable=True), sa.Column('workflow_name', 
sa.String(length=255), nullable=True), sa.Column('workflow_id', sa.String(length=80), nullable=True), sa.Column('spec', st.JsonMediumDictType(), nullable=True), sa.Column('state', sa.String(length=20), nullable=True), sa.Column('state_info', sa.TEXT(), nullable=True), sa.Column('tags', st.JsonListType(), nullable=True), sa.Column('runtime_context', st.JsonLongDictType(), nullable=True), sa.Column('accepted', sa.Boolean(), nullable=True), sa.Column('input', st.JsonLongDictType(), nullable=True), sa.Column('output', st.JsonLongDictType(), nullable=True), sa.Column('params', st.JsonLongDictType(), nullable=True), sa.Column('context', st.JsonLongDictType(), nullable=True), sa.Column('task_execution_id', sa.String(length=36), nullable=True), sa.PrimaryKeyConstraint('id'), sa.Index( 'workflow_executions_v2_project_id', 'project_id' ), sa.Index( 'workflow_executions_v2_scope', 'scope' ), sa.Index( 'workflow_executions_v2_state', 'state' ), sa.Index( 'workflow_executions_v2_updated_at', 'updated_at' ), ) op.create_table( 'task_executions_v2', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('scope', sa.String(length=80), nullable=True), sa.Column('project_id', sa.String(length=80), nullable=True), sa.Column('id', sa.String(length=36), nullable=False), sa.Column('name', sa.String(length=255), nullable=True), sa.Column('description', sa.String(length=255), nullable=True), sa.Column('workflow_name', sa.String(length=255), nullable=True), sa.Column('workflow_id', sa.String(length=80), nullable=True), sa.Column('spec', st.JsonMediumDictType(), nullable=True), sa.Column('state', sa.String(length=20), nullable=True), sa.Column('state_info', sa.TEXT(), nullable=True), sa.Column('tags', st.JsonListType(), nullable=True), sa.Column('runtime_context', st.JsonLongDictType(), nullable=True), sa.Column('action_spec', st.JsonLongDictType(), nullable=True), sa.Column('processed', sa.Boolean(), nullable=True), 
sa.Column('in_context', st.JsonLongDictType(), nullable=True), sa.Column('published', st.JsonLongDictType(), nullable=True), sa.Column( 'workflow_execution_id', sa.String(length=36), nullable=True ), sa.PrimaryKeyConstraint('id'), sa.Index( 'task_executions_v2_project_id', 'project_id' ), sa.Index( 'task_executions_v2_scope', 'scope' ), sa.Index( 'task_executions_v2_state', 'state' ), sa.Index( 'task_executions_v2_updated_at', 'updated_at' ), sa.Index( 'task_executions_v2_workflow_execution_id', 'workflow_execution_id' ), sa.ForeignKeyConstraint( ['workflow_execution_id'], [u'workflow_executions_v2.id'], ondelete='CASCADE' ), ) # 2 foreign keys are added here because all 3 tables are dependent. op.create_foreign_key( None, 'action_executions_v2', 'task_executions_v2', ['task_execution_id'], ['id'], ondelete='CASCADE' ) op.create_foreign_key( None, 'workflow_executions_v2', 'task_executions_v2', ['task_execution_id'], ['id'], ondelete='CASCADE' ) op.alter_column( 'workbooks_v2', 'name', type_=sa.String(length=255) ) op.alter_column( 'workbooks_v2', 'definition', type_=st.MediumText() ) op.alter_column( 'workbooks_v2', 'spec', type_=st.JsonMediumDictType() ) op.alter_column( 'workflow_definitions_v2', 'name', type_=sa.String(length=255) ) op.alter_column( 'workflow_definitions_v2', 'definition', type_=st.MediumText() ) op.alter_column( 'workflow_definitions_v2', 'spec', type_=st.JsonMediumDictType() ) op.alter_column( 'action_definitions_v2', 'name', type_=sa.String(length=255) ) op.alter_column( 'action_definitions_v2', 'definition', type_=st.MediumText() ) op.alter_column( 'action_definitions_v2', 'spec', type_=st.JsonMediumDictType() ) op.alter_column( 'cron_triggers_v2', 'workflow_name', type_=sa.String(length=255) ) mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/001_kilo.py0000666000175100017510000002541113245513261031006 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Kilo release Revision ID: 001 Revises: None Create Date: 2015-03-31 12:02:51.935368 """ # revision identifiers, used by Alembic. revision = '001' down_revision = None from alembic import op import sqlalchemy as sa from mistral.db.sqlalchemy import types as st def upgrade(): op.create_table( 'workbooks_v2', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('scope', sa.String(length=80), nullable=True), sa.Column('project_id', sa.String(length=80), nullable=True), sa.Column('id', sa.String(length=36), nullable=False), sa.Column('name', sa.String(length=80), nullable=True), sa.Column('definition', sa.Text(), nullable=True), sa.Column('spec', st.JsonEncoded(), nullable=True), sa.Column('tags', st.JsonEncoded(), nullable=True), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('name', 'project_id') ) op.create_table( 'tasks', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.String(length=36), nullable=False), sa.Column('name', sa.String(length=80), nullable=True), sa.Column('requires', st.JsonEncoded(), nullable=True), sa.Column('workbook_name', sa.String(length=80), nullable=True), sa.Column('execution_id', sa.String(length=36), nullable=True), sa.Column('description', sa.String(length=200), nullable=True), sa.Column('task_spec', st.JsonEncoded(), nullable=True), 
sa.Column('action_spec', st.JsonEncoded(), nullable=True), sa.Column('state', sa.String(length=20), nullable=True), sa.Column('tags', st.JsonEncoded(), nullable=True), sa.Column('in_context', st.JsonEncoded(), nullable=True), sa.Column('parameters', st.JsonEncoded(), nullable=True), sa.Column('output', st.JsonEncoded(), nullable=True), sa.Column('task_runtime_context', st.JsonEncoded(), nullable=True), sa.PrimaryKeyConstraint('id') ) op.create_table( 'action_definitions_v2', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('scope', sa.String(length=80), nullable=True), sa.Column('project_id', sa.String(length=80), nullable=True), sa.Column('id', sa.String(length=36), nullable=False), sa.Column('name', sa.String(length=80), nullable=True), sa.Column('definition', sa.Text(), nullable=True), sa.Column('spec', st.JsonEncoded(), nullable=True), sa.Column('tags', st.JsonEncoded(), nullable=True), sa.Column('description', sa.Text(), nullable=True), sa.Column('input', sa.Text(), nullable=True), sa.Column('action_class', sa.String(length=200), nullable=True), sa.Column('attributes', st.JsonEncoded(), nullable=True), sa.Column('is_system', sa.Boolean(), nullable=True), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('name', 'project_id') ) op.create_table( 'workflow_definitions_v2', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('scope', sa.String(length=80), nullable=True), sa.Column('project_id', sa.String(length=80), nullable=True), sa.Column('id', sa.String(length=36), nullable=False), sa.Column('name', sa.String(length=80), nullable=True), sa.Column('definition', sa.Text(), nullable=True), sa.Column('spec', st.JsonEncoded(), nullable=True), sa.Column('tags', st.JsonEncoded(), nullable=True), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('name', 'project_id') ) op.create_table( 'executions_v2', sa.Column('created_at', 
sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('scope', sa.String(length=80), nullable=True), sa.Column('project_id', sa.String(length=80), nullable=True), sa.Column('type', sa.String(length=50), nullable=True), sa.Column('id', sa.String(length=36), nullable=False), sa.Column('name', sa.String(length=80), nullable=True), sa.Column('workflow_name', sa.String(length=80), nullable=True), sa.Column('spec', st.JsonEncoded(), nullable=True), sa.Column('state', sa.String(length=20), nullable=True), sa.Column('state_info', sa.String(length=1024), nullable=True), sa.Column('tags', st.JsonEncoded(), nullable=True), sa.Column('accepted', sa.Boolean(), nullable=True), sa.Column('input', st.JsonEncoded(), nullable=True), sa.Column('output', st.JsonLongDictType(), nullable=True), sa.Column('params', st.JsonEncoded(), nullable=True), sa.Column('context', st.JsonEncoded(), nullable=True), sa.Column('action_spec', st.JsonEncoded(), nullable=True), sa.Column('processed', sa.BOOLEAN(), nullable=True), sa.Column('in_context', st.JsonLongDictType(), nullable=True), sa.Column('published', st.JsonEncoded(), nullable=True), sa.Column('runtime_context', st.JsonEncoded(), nullable=True), sa.Column('task_execution_id', sa.String(length=36), nullable=True), sa.Column( 'workflow_execution_id', sa.String(length=36), nullable=True ), sa.ForeignKeyConstraint( ['task_execution_id'], [u'executions_v2.id'], ), sa.ForeignKeyConstraint( ['workflow_execution_id'], [u'executions_v2.id'], ), sa.PrimaryKeyConstraint('id') ) op.create_table( 'workbooks', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.String(length=36), nullable=False), sa.Column('name', sa.String(length=80), nullable=False), sa.Column('definition', sa.Text(), nullable=True), sa.Column('description', sa.String(length=200), nullable=True), sa.Column('tags', st.JsonEncoded(), nullable=True), sa.Column('scope', 
sa.String(length=80), nullable=True), sa.Column('project_id', sa.String(length=80), nullable=True), sa.Column('trust_id', sa.String(length=80), nullable=True), sa.PrimaryKeyConstraint('id', 'name'), sa.UniqueConstraint('name') ) op.create_table( 'environments_v2', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('scope', sa.String(length=80), nullable=True), sa.Column('project_id', sa.String(length=80), nullable=True), sa.Column('id', sa.String(length=36), nullable=False), sa.Column('name', sa.String(length=200), nullable=True), sa.Column('description', sa.Text(), nullable=True), sa.Column('variables', st.JsonEncoded(), nullable=True), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('name', 'project_id') ) op.create_table( 'triggers', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.String(length=36), nullable=False), sa.Column('name', sa.String(length=80), nullable=False), sa.Column('pattern', sa.String(length=20), nullable=False), sa.Column('next_execution_time', sa.DateTime(), nullable=False), sa.Column('workbook_name', sa.String(length=80), nullable=False), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('name') ) op.create_table( 'delayed_calls_v2', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.String(length=36), nullable=False), sa.Column( 'factory_method_path', sa.String(length=200), nullable=True ), sa.Column('target_method_name', sa.String(length=80), nullable=False), sa.Column('method_arguments', st.JsonEncoded(), nullable=True), sa.Column('serializers', st.JsonEncoded(), nullable=True), sa.Column('auth_context', st.JsonEncoded(), nullable=True), sa.Column('execution_time', sa.DateTime(), nullable=False), sa.PrimaryKeyConstraint('id') ) op.create_table( 'workflow_executions', sa.Column('created_at', sa.DateTime(), 
nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('id', sa.String(length=36), nullable=False), sa.Column('workbook_name', sa.String(length=80), nullable=True), sa.Column('task', sa.String(length=80), nullable=True), sa.Column('state', sa.String(length=20), nullable=True), sa.Column('context', st.JsonEncoded(), nullable=True), sa.PrimaryKeyConstraint('id') ) op.create_table( 'cron_triggers_v2', sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('updated_at', sa.DateTime(), nullable=True), sa.Column('scope', sa.String(length=80), nullable=True), sa.Column('project_id', sa.String(length=80), nullable=True), sa.Column('id', sa.String(length=36), nullable=False), sa.Column('name', sa.String(length=200), nullable=True), sa.Column('pattern', sa.String(length=100), nullable=True), sa.Column('next_execution_time', sa.DateTime(), nullable=False), sa.Column('workflow_name', sa.String(length=80), nullable=True), sa.Column('remaining_executions', sa.Integer(), nullable=True), sa.Column('workflow_id', sa.String(length=36), nullable=True), sa.Column('workflow_input', st.JsonEncoded(), nullable=True), sa.Column('workflow_input_hash', sa.CHAR(length=64), nullable=True), sa.Column('trust_id', sa.String(length=80), nullable=True), sa.ForeignKeyConstraint( ['workflow_id'], [u'workflow_definitions_v2.id'], ), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('name', 'project_id'), sa.UniqueConstraint( 'workflow_input_hash', 'workflow_name', 'pattern', 'project_id' ) ) ././@LongLink0000000000000000000000000000016100000000000011213 Lustar 00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/005_increase_execution_columns_size.pymistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/005_increase_execution_col0000666000175100017510000000303613245513261034135 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Increase executions_v2 column size from JsonDictType to JsonLongDictType Revision ID: 005 Revises: 004 Create Date: 2015-07-21 08:48:51.636094 """ # revision identifiers, used by Alembic. revision = '005' down_revision = '004' from alembic import op from mistral.db.sqlalchemy import types as st def upgrade(): # Changing column types from JsonDictType to JsonLongDictType op.alter_column('executions_v2', 'runtime_context', type_=st.JsonLongDictType()) op.alter_column('executions_v2', 'input', type_=st.JsonLongDictType()) op.alter_column('executions_v2', 'params', type_=st.JsonLongDictType()) op.alter_column('executions_v2', 'context', type_=st.JsonLongDictType()) op.alter_column('executions_v2', 'action_spec', type_=st.JsonLongDictType()) op.alter_column('executions_v2', 'published', type_=st.JsonLongDictType()) mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/002_kilo.py0000666000175100017510000000263213245513261031007 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Kilo Revision ID: 002 Revises: 001 Create Date: 2015-04-30 16:15:34.737030 """ # revision identifiers, used by Alembic. revision = '002' down_revision = '001' from alembic import op import sqlalchemy as sa from mistral.db.sqlalchemy import types as st def upgrade(): op.drop_table('tasks') op.drop_table('workflow_executions') op.drop_table('workbooks') op.drop_table('triggers') op.add_column( 'cron_triggers_v2', sa.Column('workflow_params', st.JsonEncoded(), nullable=True) ) op.add_column( 'cron_triggers_v2', sa.Column('workflow_params_hash', sa.CHAR(length=64), nullable=True) ) op.create_unique_constraint( None, 'cron_triggers_v2', ['workflow_input_hash', 'workflow_name', 'pattern', 'project_id', 'workflow_params_hash'] ) ././@LongLink0000000000000000000000000000014700000000000011217 Lustar 00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/023_add_root_execution_id.pymistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/023_add_root_execution_id.0000666000175100017510000000175513245513261034042 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. 
# See the License for the specific language governing permissions and # limitations under the License. """Add the root execution ID to the workflow execution model Revision ID: 023 Revises: 022 Create Date: 2017-07-26 14:51:02.384729 """ # revision identifiers, used by Alembic. revision = '023' down_revision = '022' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column( 'workflow_executions_v2', sa.Column('root_execution_id', sa.String(length=80), nullable=True) ) ././@LongLink0000000000000000000000000000016700000000000011221 Lustar 00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/015_add_unique_keys_for_non_locking_model.pymistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/015_add_unique_keys_for_no0000666000175100017510000000245113245513261034140 0ustar zuulzuul00000000000000# Copyright 2016 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """add_unique_keys_for_non_locking_model Revision ID: 015 Revises: 014 Create Date: 2016-08-08 11:05:20.109380 """ # revision identifiers, used by Alembic. 
revision = '015' down_revision = '014' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column( 'delayed_calls_v2', sa.Column('unique_key', sa.String(length=80), nullable=True) ) op.create_unique_constraint( None, 'delayed_calls_v2', ['unique_key', 'processing'] ) op.add_column( 'task_executions_v2', sa.Column('unique_key', sa.String(length=80), nullable=True) ) op.create_unique_constraint( None, 'task_executions_v2', ['unique_key'] ) ././@LongLink0000000000000000000000000000015700000000000011220 Lustar 00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/011_add_workflow_id_for_execution.pymistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/versions/011_add_workflow_id_for_ex0000666000175100017510000000170213245513261034117 0ustar zuulzuul00000000000000# Copyright 2016 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """add workflow id for execution Revision ID: 011 Revises: 010 Create Date: 2016-02-02 22:29:34.672735 """ # revision identifiers, used by Alembic. 
revision = '011' down_revision = '010' from alembic import op import sqlalchemy as sa def upgrade(): op.add_column( 'executions_v2', sa.Column('workflow_id', sa.String(length=80), nullable=True) ) mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/env.py0000666000175100017510000000462313245513261026412 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from __future__ import with_statement from alembic import context from logging import config as c from oslo_utils import importutils from sqlalchemy import create_engine from sqlalchemy import pool from mistral.db.sqlalchemy import model_base importutils.try_import('mistral.db.v2.sqlalchemy.models') # This is the Alembic Config object, which provides # access to the values within the .ini file in use. config = context.config mistral_config = config.mistral_config # Interpret the config file for Python logging. # This line sets up loggers basically. c.fileConfig(config.config_file_name) # Add your model's MetaData object here for 'autogenerate' support. target_metadata = model_base.MistralSecureModelBase.metadata def run_migrations_offline(): """Run migrations in 'offline' mode. This configures the context with just a URL and not an Engine, though an Engine is acceptable here as well. By skipping the Engine creation we don't even need a DBAPI to be available. Calls to context.execute() here emit the given string to the script output. 
""" context.configure(url=mistral_config.database.connection) with context.begin_transaction(): context.run_migrations() def run_migrations_online(): """Run migrations in 'online' mode. In this scenario we need to create an Engine and associate a connection with the context. """ engine = create_engine( mistral_config.database.connection, poolclass=pool.NullPool ) connection = engine.connect() context.configure( connection=connection, target_metadata=target_metadata ) try: with context.begin_transaction(): context.run_migrations() finally: connection.close() if context.is_offline_mode(): run_migrations_offline() else: run_migrations_online() mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/README.md0000666000175100017510000000414613245513261026527 0ustar zuulzuul00000000000000The migrations in `alembic_migrations/versions` contain the changes needed to migrate between Mistral database revisions. A migration occurs by executing a script that details the changes needed to upgrade the database. The migration scripts are ordered so that multiple scripts can run sequentially. The scripts are executed by Mistral's migration wrapper which uses the Alembic library to manage the migration. Mistral supports migration from Kilo or later. 
You can upgrade to the latest database version via: ``` mistral-db-manage --config-file /path/to/mistral.conf upgrade head ``` You can populate the database with standard actions and workflows: ``` mistral-db-manage --config-file /path/to/mistral.conf populate ``` To check the current database version: ``` mistral-db-manage --config-file /path/to/mistral.conf current ``` To create a script to run the migration offline: ``` mistral-db-manage --config-file /path/to/mistral.conf upgrade head --sql ``` To run the offline migration between specific migration versions: ``` mistral-db-manage --config-file /path/to/mistral.conf upgrade : --sql ``` Upgrade the database incrementally: ``` mistral-db-manage --config-file /path/to/mistral.conf upgrade --delta <# of revs> ``` Or, upgrade the database to one newer revision: ``` mistral-db-manage --config-file /path/to/mistral.conf upgrade +1 ``` Create new revision: ``` mistral-db-manage --config-file /path/to/mistral.conf revision -m "description of revision" --autogenerate ``` Create a blank file: ``` mistral-db-manage --config-file /path/to/mistral.conf revision -m "description of revision" ``` This command does not perform any migrations, it only sets the revision. Revision may be any existing revision. Use this command carefully. 
``` mistral-db-manage --config-file /path/to/mistral.conf stamp ``` To verify that the timeline does branch, you can run this command: ``` mistral-db-manage --config-file /path/to/mistral.conf check_migration ``` If the migration path has branch, you can find the branch point via: ``` mistral-db-manage --config-file /path/to/mistral.conf historymistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/__init__.py0000666000175100017510000000000013245513261027342 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic_migrations/script.py.mako0000666000175100017510000000166713245513261030061 0ustar zuulzuul00000000000000# Copyright ${create_date.year} OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """${message} Revision ID: ${up_revision} Revises: ${down_revision} Create Date: ${create_date} """ # revision identifiers, used by Alembic. revision = ${repr(up_revision)} down_revision = ${repr(down_revision)} from alembic import op import sqlalchemy as sa ${imports if imports else ""} def upgrade(): ${upgrades if upgrades else "pass"}mistral-6.0.0/mistral/db/sqlalchemy/migration/cli.py0000666000175100017510000000773013245513261022543 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Starter script for mistral-db-manage.""" import os from alembic import command as alembic_cmd from alembic import config as alembic_cfg from alembic import util as alembic_u from oslo_config import cfg from oslo_log import log as logging from oslo_utils import importutils import six import sys from mistral import config from mistral.services import action_manager from mistral.services import workflows # We need to import mistral.api.app to # make sure we register all needed options. importutils.try_import('mistral.api.app') CONF = cfg.CONF LOG = logging.getLogger(__name__) def do_alembic_command(config, cmd, *args, **kwargs): try: getattr(alembic_cmd, cmd)(config, *args, **kwargs) except alembic_u.CommandError as e: alembic_u.err(six.text_type(e)) def do_check_migration(config, _cmd): do_alembic_command(config, 'branches') def do_upgrade(config, cmd): if not CONF.command.revision and not CONF.command.delta: raise SystemExit('You must provide a revision or relative delta') revision = CONF.command.revision if CONF.command.delta: sign = '+' if CONF.command.name == 'upgrade' else '-' revision = sign + str(CONF.command.delta) do_alembic_command(config, cmd, revision, sql=CONF.command.sql) def do_stamp(config, cmd): do_alembic_command( config, cmd, CONF.command.revision, sql=CONF.command.sql ) def do_populate(config, cmd): LOG.info("Populating db") action_manager.sync_db() workflows.sync_db() def do_revision(config, cmd): do_alembic_command( config, cmd, message=CONF.command.message, autogenerate=CONF.command.autogenerate, sql=CONF.command.sql ) def 
add_command_parsers(subparsers): for name in ['current', 'history', 'branches']: parser = subparsers.add_parser(name) parser.set_defaults(func=do_alembic_command) parser = subparsers.add_parser('upgrade') parser.add_argument('--delta', type=int) parser.add_argument('--sql', action='store_true') parser.add_argument('revision', nargs='?') parser.set_defaults(func=do_upgrade) parser = subparsers.add_parser('populate') parser.set_defaults(func=do_populate) parser = subparsers.add_parser('stamp') parser.add_argument('--sql', action='store_true') parser.add_argument('revision', nargs='?') parser.set_defaults(func=do_stamp) parser = subparsers.add_parser('revision') parser.add_argument('-m', '--message') parser.add_argument('--autogenerate', action='store_true') parser.add_argument('--sql', action='store_true') parser.set_defaults(func=do_revision) command_opt = cfg.SubCommandOpt('command', title='Command', help='Available commands', handler=add_command_parsers) CONF.register_cli_opt(command_opt) CONF.register_cli_opt(config.os_actions_mapping_path) def main(): config = alembic_cfg.Config( os.path.join(os.path.dirname(__file__), 'alembic.ini') ) config.set_main_option( 'script_location', 'mistral.db.sqlalchemy.migration:alembic_migrations' ) # attach the Mistral conf to the Alembic conf config.mistral_config = CONF logging.register_options(CONF) CONF(project='mistral') logging.setup(CONF, 'Mistral') CONF.command.func(config, CONF.command.name) if __name__ == '__main__': sys.exit(main()) mistral-6.0.0/mistral/db/sqlalchemy/migration/alembic.ini0000666000175100017510000000214213245513261023507 0ustar zuulzuul00000000000000# A generic, single database configuration. 
[alembic] # path to migration scripts script_location = mistral/db/sqlalchemy/migration/alembic_migrations # template used to generate migration files # file_template = %%(rev)s_%%(slug)s # max length of characters to apply to the # "slug" field #truncate_slug_length = 40 # set to 'true' to run the environment during # the 'revision' command, regardless of autogenerate # revision_environment = false # set to 'true' to allow .pyc and .pyo files without # a source .py file to be detected as revisions in the # versions/ directory # sourceless = false sqlalchemy.url = # Logging configuration [loggers] keys = root,sqlalchemy,alembic [handlers] keys = console [formatters] keys = generic [logger_root] level = WARN handlers = console qualname = [logger_sqlalchemy] level = WARN handlers = qualname = sqlalchemy.engine [logger_alembic] level = INFO handlers = qualname = alembic [handler_console] class = StreamHandler args = (sys.stderr,) level = NOTSET formatter = generic [formatter_generic] format = %(levelname)-5.5s [%(name)s] %(message)s datefmt = %H:%M:%Smistral-6.0.0/mistral/db/sqlalchemy/migration/__init__.py0000666000175100017510000000000013245513261023512 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/db/sqlalchemy/types.py0000666000175100017510000000611113245513261021137 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
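`MutableList`, defined in `types.py` below, lets SQLAlchemy notice in-place mutations of a JSON-encoded list column by calling `self.changed()` after every mutating operation. The idea can be sketched without SQLAlchemy as a plain list subclass that merely counts change events (the `changes` counter is illustrative; the real class notifies SQLAlchemy's `mutable` machinery instead so the owning column is flushed):

```python
class TrackedList(list):
    """A list that records how many times it was mutated in place."""

    def __init__(self, *args):
        super(TrackedList, self).__init__(*args)
        self.changes = 0

    def _changed(self):
        # MutableList calls mutable.Mutable.changed() here to mark the
        # owning column as dirty; this sketch just counts the events.
        self.changes += 1

    def append(self, value):
        list.append(self, value)
        self._changed()

    def __setitem__(self, key, value):
        list.__setitem__(self, key, value)
        self._changed()

    def __delitem__(self, i):
        list.__delitem__(self, i)
        self._changed()


lst = TrackedList([1, 2, 3])
lst.append(4)  # change 1
lst[0] = 10    # change 2
del lst[1]     # change 3
```

(As an aside, `MutableList.__add__` in the module below discards the result of `list.__add__` and implicitly returns `None`, so `+` on such a column does not behave like plain list concatenation.)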
# # This module implements SQLAlchemy-based types for dict and list # expressed by json-strings # from oslo_serialization import jsonutils import sqlalchemy as sa from sqlalchemy.dialects import mysql from sqlalchemy.ext import mutable class JsonEncoded(sa.TypeDecorator): """Represents an immutable structure as a json-encoded string.""" impl = sa.Text def process_bind_param(self, value, dialect): if value is not None: value = jsonutils.dumps(value) return value def process_result_value(self, value, dialect): if value is not None: value = jsonutils.loads(value) return value class MutableList(mutable.Mutable, list): @classmethod def coerce(cls, key, value): """Convert plain lists to MutableList.""" if not isinstance(value, MutableList): if isinstance(value, list): return MutableList(value) # this call will raise ValueError return mutable.Mutable.coerce(key, value) return value def __add__(self, value): """Detect list add events and emit change events.""" list.__add__(self, value) self.changed() def append(self, value): """Detect list add events and emit change events.""" list.append(self, value) self.changed() def __setitem__(self, key, value): """Detect list set events and emit change events.""" list.__setitem__(self, key, value) self.changed() def __delitem__(self, i): """Detect list del events and emit change events.""" list.__delitem__(self, i) self.changed() def JsonDictType(): """Returns an SQLAlchemy Column Type suitable to store a Json dict.""" return mutable.MutableDict.as_mutable(JsonEncoded) def JsonListType(): """Returns an SQLAlchemy Column Type suitable to store a Json array.""" return MutableList.as_mutable(JsonEncoded) def MediumText(): # TODO(rakhmerov): Need to do for postgres. return sa.Text().with_variant(mysql.MEDIUMTEXT(), 'mysql') class JsonEncodedMediumText(JsonEncoded): impl = MediumText() def JsonMediumDictType(): return mutable.MutableDict.as_mutable(JsonEncodedMediumText) def LongText(): # TODO(rakhmerov): Need to do for postgres. 
return sa.Text().with_variant(mysql.LONGTEXT(), 'mysql') class JsonEncodedLongText(JsonEncoded): impl = LongText() def JsonLongDictType(): return mutable.MutableDict.as_mutable(JsonEncodedLongText) mistral-6.0.0/mistral/db/__init__.py0000666000175100017510000000000013245513261017357 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/db/utils.py0000666000175100017510000000645513245513261017004 0ustar zuulzuul00000000000000# Copyright 2016 - Nokia Networks # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import functools from oslo_db import exception as db_exc from oslo_log import log as logging import tenacity from mistral import context from mistral import exceptions as exc from mistral.services import security LOG = logging.getLogger(__name__) _RETRY_ERRORS = (db_exc.DBDeadlock, db_exc.DBConnectionError) @tenacity.retry( retry=tenacity.retry_if_exception_type(_RETRY_ERRORS), stop=tenacity.stop_after_attempt(50), wait=tenacity.wait_incrementing(start=0, increment=0.1, max=2) ) def _with_auth_context(auth_ctx, func, *args, **kw): """Runs the given function with the specified auth context. :param auth_ctx: Authentication context. :param func: Function to run with the specified auth context. :param args: Function positional arguments. :param kw: Function keyword arguments. :return: Function result. 
""" old_auth_ctx = context.ctx() if context.has_ctx() else None context.set_ctx(auth_ctx) try: return func(*args, **kw) except _RETRY_ERRORS: LOG.exception( "DB error detected, operation will be retried: %s", func ) raise finally: context.set_ctx(old_auth_ctx) def retry_on_db_error(func): """Decorates the given function so that it retries on DB errors. Note that the decorator retries the function/method only on some of the DB errors that are considered to be worth retrying, like deadlocks and disconnections. :param func: Function to decorate. :return: Decorated function. """ @functools.wraps(func) def decorate(*args, **kw): # Retrying library decorator might potentially run a decorated # function within a new thread so it's safer not to apply the # decorator directly to a target method/function because we can # lose an authentication context. # The solution is to create one more function and explicitly set # auth context before calling it (potentially in a new thread). auth_ctx = context.ctx() if context.has_ctx() else None return _with_auth_context(auth_ctx, func, *args, **kw) return decorate def check_db_obj_access(db_obj): """Check accessibility to db object.""" ctx = context.ctx() is_admin = ctx.is_admin if not is_admin and db_obj.project_id != security.get_project_id(): raise exc.NotAllowedException( "Can not access %s resource of other projects, ID: %s" % (db_obj.__class__.__name__, db_obj.id) ) if not is_admin and hasattr(db_obj, 'is_system') and db_obj.is_system: raise exc.InvalidActionException( "Can not modify a system %s resource, ID: %s" % (db_obj.__class__.__name__, db_obj.id) ) mistral-6.0.0/mistral/config.py0000666000175100017510000004157513245513272016530 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Configuration options registration and useful routines. """ import itertools import os from oslo_config import cfg from oslo_log import log from oslo_middleware import cors from osprofiler import opts as profiler from mistral import version from mistral._i18n import _ # Options under default group. launch_opt = cfg.ListOpt( 'server', default=['all'], help=_('Specifies which mistral server to start by the launch script. ' 'Valid options are all or any combination of ' 'api, engine, and executor.') ) wf_trace_log_name_opt = cfg.StrOpt( 'workflow_trace_log_name', default='workflow_trace', help=_('Logger name for pretty workflow trace output.') ) use_debugger_opt = cfg.BoolOpt( 'use-debugger', default=False, help=_('Enables debugger. Note that using this option changes how the ' 'eventlet library is used to support async IO. This could result ' 'in failures that do not occur under normal operation. 
' 'Use at your own risk.') ) auth_type_opt = cfg.StrOpt( 'auth_type', default='keystone', help=_('Authentication type (valid options: keystone, keycloak-oidc)') ) api_opts = [ cfg.HostAddressOpt( 'host', default='0.0.0.0', help='Mistral API server host' ), cfg.PortOpt('port', default=8989, help='Mistral API server port'), cfg.BoolOpt( 'allow_action_execution_deletion', default=False, help=_('Enables the ability to delete action_execution which ' 'has no relationship with workflows.') ), cfg.BoolOpt( 'enable_ssl_api', default=False, help=_('Enable the integrated stand-alone API to service requests' 'via HTTPS instead of HTTP.') ), cfg.IntOpt( 'api_workers', help=_('Number of workers for Mistral API service ' 'default is equal to the number of CPUs available if that can ' 'be determined, else a default worker count of 1 is returned.') ) ] js_impl_opt = cfg.StrOpt( 'js_implementation', default='pyv8', choices=['pyv8', 'v8eval'], help=_('The JavaScript implementation to be used by the std.javascript ' 'action to evaluate scripts.') ) rpc_impl_opt = cfg.StrOpt( 'rpc_implementation', default='oslo', choices=['oslo', 'kombu'], help=_('Specifies RPC implementation for RPC client and server. 
' 'Support of kombu driver is experimental.') ) # TODO(ddeja): This config option is a part of oslo RPCClient # It would be the best to not register it twice, rather use RPCClient somehow rpc_response_timeout_opt = cfg.IntOpt( 'rpc_response_timeout', default=60, help=_('Seconds to wait for a response from a call.') ) expiration_token_duration = cfg.IntOpt( 'expiration_token_duration', default=30, help=_('Window of seconds to determine whether the given token is about' ' to expire.') ) pecan_opts = [ cfg.StrOpt( 'root', default='mistral.api.controllers.root.RootController', help=_('Pecan root controller') ), cfg.ListOpt( 'modules', default=["mistral.api"], help=_('A list of modules where pecan will search for applications.') ), cfg.BoolOpt( 'debug', default=False, help=_('Enables the ability to display tracebacks in the browser and' ' interactively debug during development.') ), cfg.BoolOpt( 'auth_enable', default=True, help=_('Enables user authentication in pecan.') ) ] engine_opts = [ cfg.StrOpt('engine', default='default', help='Mistral engine plugin'), cfg.HostAddressOpt( 'host', default='0.0.0.0', help=_('Name of the engine node. This can be an opaque ' 'identifier. It is not necessarily a hostname, ' 'FQDN, or IP address.') ), cfg.StrOpt( 'topic', default='mistral_engine', help=_('The message topic that the engine listens on.') ), cfg.StrOpt('version', default='1.0', help='The version of the engine.'), cfg.IntOpt( 'execution_field_size_limit_kb', default=1024, help=_('The default maximum size in KB of large text fields ' 'of runtime execution objects. Use -1 for no limit.') ), cfg.IntOpt( 'execution_integrity_check_delay', default=20, help=_('A number of seconds since the last update of a task' ' execution in RUNNING state after which Mistral will' ' start checking its integrity, meaning that if all' ' associated actions/workflows are finished its state' ' will be restored automatically. 
If this property is' ' set to a negative value Mistral will never be doing ' ' this check.') ) ] executor_opts = [ cfg.StrOpt( 'type', choices=['local', 'remote'], default='remote', help=( 'Type of executor. Use local to run the executor within the ' 'engine server. Use remote if the executor is launched as ' 'a separate server to run action executions.' ) ), cfg.HostAddressOpt( 'host', default='0.0.0.0', help=_('Name of the executor node. This can be an opaque ' 'identifier. It is not necessarily a hostname, ' 'FQDN, or IP address.') ), cfg.StrOpt( 'topic', default='mistral_executor', help=_('The message topic that the executor listens on.') ), cfg.StrOpt( 'version', default='1.0', help=_('The version of the executor.') ) ] scheduler_opts = [ cfg.FloatOpt( 'fixed_delay', default=1, min=0.1, help=( 'Fixed part of the delay between scheduler iterations, ' 'in seconds. ' 'Full delay is defined as a sum of "fixed_delay" and a random ' 'delay limited by "random_delay".' ) ), cfg.FloatOpt( 'random_delay', default=0, min=0, help=( 'Max value of the random part of the delay between scheduler ' 'iterations, in seconds. ' 'Full delay is defined as a sum of "fixed_delay" and a random ' 'delay limited by this property.' ) ), cfg.IntOpt( 'batch_size', default=None, min=1, help=( 'The max number of delayed calls will be selected during ' 'a scheduler iteration. ' 'If this property equals None then there is no ' 'restriction on selection.' ) ) ] cron_trigger_opts = [ cfg.BoolOpt( 'enabled', default=True, help=( 'If this value is set to False then the subsystem of cron triggers' ' is disabled. Disabling cron triggers increases system' ' performance.' ) ), cfg.IntOpt( 'execution_interval', default=1, min=1, help=( 'This setting defines how frequently Mistral checks for cron ', 'triggers that need execution. By default this is every second ', 'which can lead to high system load. Increasing the number will ', 'reduce the load but also limit the minimum freqency. 
For ', 'example, a cron trigger can be configured to run every second ', 'but if the execution_interval is set to 60, it will only run ', 'once per minute.' ) ) ] event_engine_opts = [ cfg.HostAddressOpt( 'host', default='0.0.0.0', help=_('Name of the event engine node. This can be an opaque ' 'identifier. It is not necessarily a hostname, ' 'FQDN, or IP address.') ), cfg.HostAddressOpt( 'listener_pool_name', default='events', help=_('Name of the event engine\'s listener pool. This can be an' ' opaque identifier. It is used for identifying the group' ' of event engine listeners in oslo.messaging.') ), cfg.StrOpt( 'topic', default='mistral_event_engine', help=_('The message topic that the event engine listens on.') ), cfg.StrOpt( 'event_definitions_cfg_file', default='/etc/mistral/event_definitions.yaml', help=_('Configuration file for event definitions.') ), ] execution_expiration_policy_opts = [ cfg.IntOpt( 'evaluation_interval', help=_('How often will the executions be evaluated ' '(in minutes). For example for value 120 the interval ' 'will be 2 hours (every 2 hours).' 'Note that only final state executions will be removed: ' '( SUCCESS / ERROR / CANCELLED ).') ), cfg.IntOpt( 'older_than', help=_('Evaluate from which time remove executions in minutes. ' 'For example when older_than = 60, remove all executions ' 'that finished a 60 minutes ago or more. ' 'Minimum value is 1.') ), cfg.IntOpt( 'max_finished_executions', default=0, help=_('The maximum number of finished workflow executions' 'to be stored. For example when max_finished_executions = 100,' 'only the 100 latest finished executions will be preserved.' 'This means that even unexpired executions are eligible' 'for deletion, to decrease the number of executions in the' 'database. The default value is 0. If it is set to 0,' 'this constraint won\'t be applied.') ), cfg.IntOpt( 'batch_size', default=0, help=_('Size of batch of expired executions to be deleted.' 'The default value is 0. 
If it is set to 0, ' 'size of batch is total number of expired executions' 'that is going to be deleted.') ) ] coordination_opts = [ cfg.StrOpt( 'backend_url', help=_('The backend URL to be used for coordination') ), cfg.FloatOpt( 'heartbeat_interval', default=5.0, help=_('Number of seconds between heartbeats for coordination.') ) ] profiler_opts = profiler.list_opts()[0][1] profiler_opts.append( cfg.StrOpt( 'profiler_log_name', default='profiler_trace', help=_('Logger name for the osprofiler trace output.') ) ) keycloak_oidc_opts = [ cfg.StrOpt( 'auth_url', help=_('Keycloak base url (e.g. https://my.keycloak:8443/auth)') ), cfg.StrOpt( 'certfile', help=_('Required if identity server requires client certificate') ), cfg.StrOpt( 'keyfile', help=_('Required if identity server requires client certificate') ), cfg.StrOpt( 'cafile', help=_('A PEM encoded Certificate Authority to use when verifying ' 'HTTPs connections. Defaults to system CAs.') ), cfg.BoolOpt( 'insecure', default=False, help=_('If True, SSL/TLS certificate verification is disabled') ) ] openstack_actions_opts = [ cfg.StrOpt( 'os-actions-endpoint-type', default=os.environ.get('OS_ACTIONS_ENDPOINT_TYPE', 'public'), choices=['public', 'admin', 'internal'], deprecated_group='DEFAULT', help=_('Type of endpoint in identity service catalog to use for' ' communication with OpenStack services.') ), cfg.ListOpt( 'modules-support-region', default=['nova', 'glance', 'heat', 'neutron', 'cinder', 'trove', 'ironic', 'designate', 'murano', 'tacker', 'senlin', 'aodh', 'gnocchi'], help=_('List of module names that support region in actions.') ), cfg.StrOpt( 'default_region', help=_('Default region name for openstack actions supporting region.') ), ] # note: this command line option is used only from sync_db and # mistral-db-manage os_actions_mapping_path = cfg.StrOpt( 'openstack_actions_mapping_path', short='m', metavar='MAPPING_PATH', default='actions/openstack/mapping.json', help='Path to openstack action mapping json 
file.' 'It could be relative to mistral package ' 'directory or absolute.' ) CONF = cfg.CONF API_GROUP = 'api' ENGINE_GROUP = 'engine' EXECUTOR_GROUP = 'executor' SCHEDULER_GROUP = 'scheduler' CRON_TRIGGER_GROUP = 'cron_trigger' EVENT_ENGINE_GROUP = 'event_engine' PECAN_GROUP = 'pecan' COORDINATION_GROUP = 'coordination' EXECUTION_EXPIRATION_POLICY_GROUP = 'execution_expiration_policy' PROFILER_GROUP = profiler.list_opts()[0][0] KEYCLOAK_OIDC_GROUP = "keycloak_oidc" OPENSTACK_ACTIONS_GROUP = 'openstack_actions' CONF.register_opt(wf_trace_log_name_opt) CONF.register_opt(auth_type_opt) CONF.register_opt(js_impl_opt) CONF.register_opt(rpc_impl_opt) CONF.register_opt(rpc_response_timeout_opt) CONF.register_opt(expiration_token_duration) CONF.register_opts(api_opts, group=API_GROUP) CONF.register_opts(engine_opts, group=ENGINE_GROUP) CONF.register_opts(executor_opts, group=EXECUTOR_GROUP) CONF.register_opts(scheduler_opts, group=SCHEDULER_GROUP) CONF.register_opts(cron_trigger_opts, group=CRON_TRIGGER_GROUP) CONF.register_opts( execution_expiration_policy_opts, group=EXECUTION_EXPIRATION_POLICY_GROUP ) CONF.register_opts(event_engine_opts, group=EVENT_ENGINE_GROUP) CONF.register_opts(pecan_opts, group=PECAN_GROUP) CONF.register_opts(coordination_opts, group=COORDINATION_GROUP) CONF.register_opts(profiler_opts, group=PROFILER_GROUP) CONF.register_opts(keycloak_oidc_opts, group=KEYCLOAK_OIDC_GROUP) CONF.register_opts(openstack_actions_opts, group=OPENSTACK_ACTIONS_GROUP) CLI_OPTS = [ use_debugger_opt, launch_opt ] default_group_opts = itertools.chain( CLI_OPTS, [ wf_trace_log_name_opt, auth_type_opt, js_impl_opt, rpc_impl_opt, rpc_response_timeout_opt, expiration_token_duration ] ) CONF.register_cli_opts(CLI_OPTS) _DEFAULT_LOG_LEVELS = [ 'eventlet.wsgi.server=WARN', 'oslo_service.periodic_task=INFO', 'oslo_service.loopingcall=INFO', 'mistral.services.periodic=INFO', 'kazoo.client=WARN', 'oslo_db=WARN' ] def list_opts(): return [ (API_GROUP, api_opts), (ENGINE_GROUP, 
engine_opts), (EXECUTOR_GROUP, executor_opts), (EVENT_ENGINE_GROUP, event_engine_opts), (SCHEDULER_GROUP, scheduler_opts), (CRON_TRIGGER_GROUP, cron_trigger_opts), (PECAN_GROUP, pecan_opts), (COORDINATION_GROUP, coordination_opts), (EXECUTION_EXPIRATION_POLICY_GROUP, execution_expiration_policy_opts), (PROFILER_GROUP, profiler_opts), (KEYCLOAK_OIDC_GROUP, keycloak_oidc_opts), (OPENSTACK_ACTIONS_GROUP, openstack_actions_opts), (None, default_group_opts) ] def parse_args(args=None, usage=None, default_config_files=None): default_log_levels = log.get_default_log_levels() default_log_levels.extend(_DEFAULT_LOG_LEVELS) log.set_defaults(default_log_levels=default_log_levels) log.register_options(CONF) CONF( args=args, project='mistral', version=version.version_string, usage=usage, default_config_files=default_config_files ) def set_config_defaults(): """This method updates all configuration default values.""" set_cors_middleware_defaults() def set_cors_middleware_defaults(): """Update default configuration options for oslo.middleware.""" cors.set_defaults( allow_headers=['X-Auth-Token', 'X-Identity-Status', 'X-Roles', 'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id', 'X-Project-Id', 'X-User-Name', 'X-Project-Name'], allow_methods=['GET', 'PUT', 'POST', 'DELETE', 'PATCH'], expose_headers=['X-Auth-Token', 'X-Subject-Token', 'X-Service-Token', 'X-Project-Id', 'X-User-Name', 'X-Project-Name'] ) mistral-6.0.0/mistral/engine/0000775000175100017510000000000013245513604016137 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/engine/actions.py0000666000175100017510000005105713245513272020164 0ustar zuulzuul00000000000000# Copyright 2016 - Nokia Networks. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import abc from oslo_config import cfg from oslo_log import log as logging from osprofiler import profiler import six from mistral.db.v2 import api as db_api from mistral.engine import action_queue from mistral.engine import utils as engine_utils from mistral.engine import workflow_handler as wf_handler from mistral import exceptions as exc from mistral.executors import base as exe from mistral import expressions as expr from mistral.lang import parser as spec_parser from mistral.services import action_manager as a_m from mistral.services import security from mistral import utils from mistral.utils import wf_trace from mistral.workflow import data_flow from mistral.workflow import states from mistral_lib import actions as ml_actions LOG = logging.getLogger(__name__) @six.add_metaclass(abc.ABCMeta) class Action(object): """Action. Represents a workflow action and defines interface that can be used by Mistral engine or its components in order to manipulate with actions. """ def __init__(self, action_def, action_ex=None, task_ex=None): self.action_def = action_def self.action_ex = action_ex self.task_ex = action_ex.task_execution if action_ex else task_ex @abc.abstractmethod def complete(self, result): """Complete action and process its result. :param result: Action result. """ raise NotImplementedError def fail(self, msg): assert self.action_ex # When we set an ERROR state we should safely set output value getting # w/o exceptions due to field size limitations. 
msg = utils.cut_by_kb( msg, cfg.CONF.engine.execution_field_size_limit_kb ) self.action_ex.state = states.ERROR self.action_ex.output = {'result': msg} def update(self, state): assert self.action_ex if state == states.PAUSED and self.is_sync(self.action_ex.input): raise exc.InvalidStateTransitionException( 'Transition to the PAUSED state is only supported ' 'for asynchronous action execution.' ) if not states.is_valid_transition(self.action_ex.state, state): raise exc.InvalidStateTransitionException( 'Invalid state transition from %s to %s.' % (self.action_ex.state, state) ) self.action_ex.state = state @abc.abstractmethod def schedule(self, input_dict, target, index=0, desc='', safe_rerun=False, timeout=None): """Schedule action run. This method is needed to schedule action run so its result can be received later by engine. In this sense it will be running in asynchronous mode from engine perspective (don't confuse with executor asynchrony when executor doesn't immediately send a result). :param timeout: a period of time in seconds after which execution of action will be interrupted :param input_dict: Action input. :param target: Target (group of action executors). :param index: Action execution index. Makes sense for some types. :param desc: Action execution description. :param safe_rerun: If true, action would be re-run if executor dies during execution. """ raise NotImplementedError @abc.abstractmethod def run(self, input_dict, target, index=0, desc='', save=True, safe_rerun=False, timeout=None): """Immediately run action. This method runs method w/o scheduling its run for a later time. From engine perspective action will be processed in synchronous mode. :param timeout: a period of time in seconds after which execution of action will be interrupted :param input_dict: Action input. :param target: Target (group of action executors). :param index: Action execution index. Makes sense for some types. :param desc: Action execution description. 
:param save: True if action execution object needs to be saved. :param safe_rerun: If true, action would be re-run if executor dies during execution. :return: Action output. """ raise NotImplementedError def validate_input(self, input_dict): """Validates action input parameters. :param input_dict: Dictionary with input parameters. """ raise NotImplementedError def is_sync(self, input_dict): """Determines if action is synchronous. :param input_dict: Dictionary with input parameters. """ return True def _create_action_execution(self, input_dict, runtime_ctx, desc='', action_ex_id=None): action_ex_id = action_ex_id or utils.generate_unicode_uuid() values = { 'id': action_ex_id, 'name': self.action_def.name, 'spec': self.action_def.spec, 'state': states.RUNNING, 'input': input_dict, 'runtime_context': runtime_ctx, 'description': desc } if self.task_ex: values.update({ 'task_execution_id': self.task_ex.id, 'workflow_name': self.task_ex.workflow_name, 'workflow_namespace': self.task_ex.workflow_namespace, 'workflow_id': self.task_ex.workflow_id, 'project_id': self.task_ex.project_id, }) else: values.update({ 'project_id': security.get_project_id(), }) self.action_ex = db_api.create_action_execution(values) if self.task_ex: # Add to collection explicitly so that it's in a proper # state within the current session. 
self.task_ex.action_executions.append(self.action_ex) @profiler.trace('action-log-result', hide_args=True) def _log_result(self, prev_state, result): state = self.action_ex.state def _result_msg(): if state == states.ERROR: return "error = %s" % utils.cut(result.error) return "result = %s" % utils.cut(result.data) if prev_state != state: wf_trace.info( None, "Action '%s' (%s)(task=%s) [%s -> %s, %s]" % (self.action_ex.name, self.action_ex.id, self.task_ex.name if self.task_ex else None, prev_state, state, _result_msg()) ) class PythonAction(Action): """Regular Python action.""" @profiler.trace('action-complete', hide_args=True) def complete(self, result): assert self.action_ex if states.is_completed(self.action_ex.state): return prev_state = self.action_ex.state if result.is_success(): self.action_ex.state = states.SUCCESS elif result.is_cancel(): self.action_ex.state = states.CANCELLED else: self.action_ex.state = states.ERROR self.action_ex.output = self._prepare_output(result).to_dict() self.action_ex.accepted = True self._log_result(prev_state, result) @profiler.trace('action-schedule', hide_args=True) def schedule(self, input_dict, target, index=0, desc='', safe_rerun=False, timeout=None): assert not self.action_ex # Assign the action execution ID here to minimize database calls. # Otherwise, the input property of the action execution DB object needs # to be updated with the action execution ID after the action execution # DB object is created. 
action_ex_id = utils.generate_unicode_uuid() self._create_action_execution( self._prepare_input(input_dict), self._prepare_runtime_context(index, safe_rerun), desc=desc, action_ex_id=action_ex_id ) execution_context = self._prepare_execution_context() action_queue.schedule_run_action( self.action_ex, self.action_def, target, execution_context, timeout=timeout ) @profiler.trace('action-run', hide_args=True) def run(self, input_dict, target, index=0, desc='', save=True, safe_rerun=False, timeout=None): assert not self.action_ex input_dict = self._prepare_input(input_dict) runtime_ctx = self._prepare_runtime_context(index, safe_rerun) # Assign the action execution ID here to minimize database calls. # Otherwise, the input property of the action execution DB object needs # to be updated with the action execution ID after the action execution # DB object is created. action_ex_id = utils.generate_unicode_uuid() if save: self._create_action_execution( input_dict, runtime_ctx, desc=desc, action_ex_id=action_ex_id ) executor = exe.get_executor(cfg.CONF.executor.type) execution_context = self._prepare_execution_context() result = executor.run_action( self.action_ex.id if self.action_ex else None, self.action_def.action_class, self.action_def.attributes or {}, input_dict, safe_rerun, execution_context, target=target, async_=False, timeout=timeout ) return self._prepare_output(result) def is_sync(self, input_dict): input_dict = self._prepare_input(input_dict) a = a_m.get_action_class(self.action_def.name)(**input_dict) return a.is_sync() def validate_input(self, input_dict): # NOTE(kong): Don't validate action input if action initialization # method contains ** argument. 
if '**' in self.action_def.input: return expected_input = utils.get_dict_from_string(self.action_def.input) engine_utils.validate_input( expected_input, input_dict, self.action_def.name, self.action_def.action_class ) def _prepare_execution_context(self): exc_ctx = {} if self.task_ex: wf_ex = self.task_ex.workflow_execution exc_ctx['workflow_execution_id'] = wf_ex.id exc_ctx['task_id'] = self.task_ex.id exc_ctx['workflow_name'] = wf_ex.name if self.action_ex: exc_ctx['action_execution_id'] = self.action_ex.id callback_url = '/v2/action_executions/%s' % self.action_ex.id exc_ctx['callback_url'] = callback_url return exc_ctx def _prepare_input(self, input_dict): """Template method to do manipulations with input parameters. Python action doesn't do anything specific with initial input. """ return input_dict def _prepare_output(self, result): """Template method to do manipulations with action result. Python action doesn't do anything specific with result. """ return result def _prepare_runtime_context(self, index, safe_rerun): """Template method to prepare action runtime context. Python action inserts index into runtime context and information if given action is safe_rerun. 
""" return {'index': index, 'safe_rerun': safe_rerun} class AdHocAction(PythonAction): """Ad-hoc action.""" def __init__(self, action_def, action_ex=None, task_ex=None, task_ctx=None, wf_ctx=None): self.action_spec = spec_parser.get_action_spec(action_def.spec) try: base_action_def = db_api.get_action_definition( self.action_spec.get_base() ) except exc.DBEntityNotFoundError: raise exc.InvalidActionException( "Failed to find action [action_name=%s]" % self.action_spec.get_base() ) base_action_def = self._gather_base_actions( action_def, base_action_def ) super(AdHocAction, self).__init__( base_action_def, action_ex, task_ex ) self.adhoc_action_def = action_def self.task_ctx = task_ctx or {} self.wf_ctx = wf_ctx or {} def validate_input(self, input_dict): expected_input = self.action_spec.get_input() engine_utils.validate_input( expected_input, input_dict, self.adhoc_action_def.name, self.action_spec.__class__.__name__ ) super(AdHocAction, self).validate_input( self._prepare_input(input_dict) ) def _prepare_input(self, input_dict): base_input_dict = input_dict for action_def in self.adhoc_action_defs: action_spec = spec_parser.get_action_spec(action_def.spec) for k, v in action_spec.get_input().items(): if (k not in base_input_dict or base_input_dict[k] is utils.NotDefined): base_input_dict[k] = v base_input_expr = action_spec.get_base_input() if base_input_expr: ctx_view = data_flow.ContextView( base_input_dict, self.task_ctx, self.wf_ctx ) base_input_dict = expr.evaluate_recursively( base_input_expr, ctx_view ) else: base_input_dict = {} return super(AdHocAction, self)._prepare_input(base_input_dict) def _prepare_output(self, result): # In case of error, we don't transform a result. 
if not result.is_error(): for action_def in reversed(self.adhoc_action_defs): adhoc_action_spec = spec_parser.get_action_spec( action_def.spec ) transformer = adhoc_action_spec.get_output() if transformer is not None: result = ml_actions.Result( data=expr.evaluate_recursively(transformer, result.data), error=result.error ) return result def _prepare_runtime_context(self, index, safe_rerun): ctx = super(AdHocAction, self)._prepare_runtime_context( index, safe_rerun ) # Insert special field into runtime context so that we track # a relationship between python action and adhoc action. return utils.merge_dicts( ctx, {'adhoc_action_name': self.adhoc_action_def.name} ) def _gather_base_actions(self, action_def, base_action_def): """Find all base ad-hoc actions and store them An ad-hoc action may be based on another ad-hoc action (and this recursively). Using twice the same base action is not allowed to avoid infinite loops. It stores the list of ad-hoc actions. :param action_def: Action definition :type action_def: ActionDefinition :param base_action_def: Original base action definition :type base_action_def: ActionDefinition :return: The definition of the base system action :rtype: ActionDefinition """ self.adhoc_action_defs = [action_def] original_base_name = self.action_spec.get_name() action_names = set([original_base_name]) base = base_action_def while not base.is_system and base.name not in action_names: action_names.add(base.name) self.adhoc_action_defs.append(base) base_name = base.spec['base'] try: base = db_api.get_action_definition(base_name) except exc.DBEntityNotFoundError: raise exc.InvalidActionException( "Failed to find action [action_name=%s]" % base_name ) # if the action is repeated if base.name in action_names: raise ValueError( 'An ad-hoc action cannot use twice the same action, %s is ' 'used at least twice' % base.name ) return base class WorkflowAction(Action): """Workflow action.""" def __init__(self, wf_name, **kwargs): super(WorkflowAction, 
self).__init__(None, **kwargs) self.wf_name = wf_name @profiler.trace('workflow-action-complete', hide_args=True) def complete(self, result): # No-op because in case of workflow the result is already processed. pass @profiler.trace('workflow-action-schedule', hide_args=True) def schedule(self, input_dict, target, index=0, desc='', safe_rerun=False, timeout=None): assert not self.action_ex parent_wf_ex = self.task_ex.workflow_execution parent_wf_spec = spec_parser.get_workflow_spec_by_execution_id( parent_wf_ex.id ) wf_def = engine_utils.resolve_workflow_definition( parent_wf_ex.workflow_name, parent_wf_spec.get_name(), namespace=parent_wf_ex.params['namespace'], wf_spec_name=self.wf_name ) wf_spec = spec_parser.get_workflow_spec_by_definition_id( wf_def.id, wf_def.updated_at ) # If the parent has a root_execution_id, it must be a sub-workflow. So # we should propagate that ID down. Otherwise the parent must be the # root execution and we should use the parent's ID. root_execution_id = parent_wf_ex.root_execution_id or parent_wf_ex.id wf_params = { 'root_execution_id': root_execution_id, 'task_execution_id': self.task_ex.id, 'index': index, 'namespace': parent_wf_ex.params['namespace'] } if 'env' in parent_wf_ex.params: wf_params['env'] = parent_wf_ex.params['env'] wf_params['evaluate_env'] = parent_wf_ex.params.get('evaluate_env') for k, v in list(input_dict.items()): if k not in wf_spec.get_input(): wf_params[k] = v del input_dict[k] wf_handler.start_workflow( wf_def.id, wf_def.namespace, None, input_dict, "sub-workflow execution", wf_params ) @profiler.trace('workflow-action-run', hide_args=True) def run(self, input_dict, target, index=0, desc='', save=True, safe_rerun=True, timeout=None): raise NotImplementedError('Does not apply to this WorkflowAction.') def is_sync(self, input_dict): # Workflow action is always asynchronous. return False def validate_input(self, input_dict): # TODO(rakhmerov): Implement. 
pass def resolve_action_definition(action_spec_name, wf_name=None, wf_spec_name=None): """Resolve action definition accounting for ad-hoc action namespacing. :param action_spec_name: Action name according to a spec. :param wf_name: Workflow name. :param wf_spec_name: Workflow name according to a spec. :return: Action definition (python or ad-hoc). """ action_db = None if wf_name and wf_name != wf_spec_name: # If workflow belongs to a workbook then check # action within the same workbook (to be able to # use short names within workbooks). # If it doesn't exist then use a name from spec # to find an action in DB. # Slice off the trailing '.<wf_spec_name>' suffix to get the # workbook name (str.rstrip would strip characters, not a suffix). wb_name = wf_name[:-(len(wf_spec_name) + 1)] action_full_name = "%s.%s" % (wb_name, action_spec_name) action_db = db_api.load_action_definition(action_full_name) if not action_db: action_db = db_api.load_action_definition(action_spec_name) if not action_db: raise exc.InvalidActionException( "Failed to find action [action_name=%s]" % action_spec_name ) return action_db mistral-6.0.0/mistral/engine/workflow_handler.py0000666000175100017510000002332713245513261022070 0ustar zuulzuul00000000000000# Copyright 2016 - Nokia Networks. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
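The two-step lookup in `resolve_action_definition` above (workbook-qualified name first, then the plain spec name) can be sketched in isolation. This is a minimal sketch: the `load` callable and the `registry` dict stand in for `db_api.load_action_definition` and are illustrative assumptions, not Mistral API.

```python
def resolve(action_spec_name, load, wf_name=None, wf_spec_name=None):
    # If the workflow belongs to a workbook, wf_name looks like
    # '<workbook>.<wf_spec_name>'; try the workbook-qualified action first.
    if wf_name and wf_name != wf_spec_name:
        wb_name = wf_name[:-(len(wf_spec_name) + 1)]
        found = load('%s.%s' % (wb_name, action_spec_name))
        if found:
            return found
    # Fall back to the plain name from the spec.
    return load(action_spec_name)


# A fake registry standing in for the action definition table.
registry = {'wb.my_action': 'workbook-scoped', 'my_action': 'global'}

print(resolve('my_action', registry.get, 'wb.wf1', 'wf1'))  # workbook-scoped
print(resolve('my_action', registry.get))                   # global
```

Inside a workbook the short name resolves to the workbook-scoped action; outside it falls back to the globally registered one.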
from oslo_config import cfg from oslo_log import log as logging from oslo_utils import timeutils from osprofiler import profiler import traceback as tb from mistral.db import utils as db_utils from mistral.db.v2 import api as db_api from mistral.engine import action_queue from mistral.engine import workflows from mistral import exceptions as exc from mistral.services import scheduler from mistral.workflow import states LOG = logging.getLogger(__name__) CONF = cfg.CONF _CHECK_AND_COMPLETE_PATH = ( 'mistral.engine.workflow_handler._check_and_complete' ) @profiler.trace('workflow-handler-start-workflow', hide_args=True) def start_workflow(wf_identifier, wf_namespace, wf_ex_id, wf_input, desc, params): wf = workflows.Workflow() wf_def = db_api.get_workflow_definition(wf_identifier, wf_namespace) if 'namespace' not in params: params['namespace'] = wf_def.namespace wf.start( wf_def=wf_def, wf_ex_id=wf_ex_id, input_dict=wf_input, desc=desc, params=params ) _schedule_check_and_complete(wf.wf_ex) return wf.wf_ex def stop_workflow(wf_ex, state, msg=None): wf = workflows.Workflow(wf_ex=wf_ex) # In this case we should not try to handle possible errors. Instead, # we need to let them pop up since the typical way of failing objects # doesn't work here. Failing a workflow is the same as stopping it # with ERROR state. wf.stop(state, msg) # Cancels subworkflows. 
if state == states.CANCELLED: for task_ex in wf_ex.task_executions: sub_wf_exs = db_api.get_workflow_executions( task_execution_id=task_ex.id ) for sub_wf_ex in sub_wf_exs: if not states.is_completed(sub_wf_ex.state): stop_workflow(sub_wf_ex, state, msg=msg) def force_fail_workflow(wf_ex, msg=None): stop_workflow(wf_ex, states.ERROR, msg) def cancel_workflow(wf_ex, msg=None): stop_workflow(wf_ex, states.CANCELLED, msg) @db_utils.retry_on_db_error @action_queue.process @profiler.trace('workflow-handler-check-and-complete', hide_args=True) def _check_and_complete(wf_ex_id): # Note: This method can only be called via scheduler. with db_api.transaction(): wf_ex = db_api.load_workflow_execution(wf_ex_id) if not wf_ex or states.is_completed(wf_ex.state): return wf = workflows.Workflow(wf_ex=wf_ex) incomplete_tasks_count = 0 try: check_and_fix_integrity(wf_ex) incomplete_tasks_count = wf.check_and_complete() except exc.MistralException as e: msg = ( "Failed to check and complete [wf_ex_id=%s, wf_name=%s]:" " %s\n%s" % (wf_ex_id, wf_ex.name, e, tb.format_exc()) ) LOG.error(msg) force_fail_workflow(wf.wf_ex, msg) return finally: if states.is_completed(wf_ex.state): return # Let's assume that a task takes 0.01 sec in average to complete # and based on this assumption calculate a time of the next check. # The estimation is very rough but this delay will be decreasing # as tasks will be completing which will give a decent # approximation. # For example, if a workflow has 100 incomplete tasks then the # next check call will happen in 1 second. For 500 tasks it will # be 5 seconds. The larger the workflow is, the more beneficial # this mechanism will be. 
delay = ( int(incomplete_tasks_count * 0.01) if incomplete_tasks_count else 4 ) _schedule_check_and_complete(wf_ex, delay) @profiler.trace('workflow-handler-check-and-fix-integrity') def check_and_fix_integrity(wf_ex): check_after_seconds = CONF.engine.execution_integrity_check_delay if check_after_seconds < 0: # Never check integrity if it's a negative value. return # To break cyclic dependency. from mistral.engine import task_handler running_task_execs = db_api.get_task_executions( workflow_execution_id=wf_ex.id, state=states.RUNNING ) for t_ex in running_task_execs: # The idea is that we take the latest known timestamp of the task # execution and consider it eligible for checking and fixing only # if some minimum period of time elapsed since the last update. timestamp = t_ex.updated_at or t_ex.created_at delta = timeutils.delta_seconds(timestamp, timeutils.utcnow()) if delta < check_after_seconds: continue child_executions = t_ex.executions if not child_executions: continue all_finished = all( [states.is_completed(c_ex.state) for c_ex in child_executions] ) if all_finished: # Find the timestamp of the most recently finished child. most_recent_child_timestamp = max( [c_ex.updated_at or c_ex.created_at for c_ex in child_executions] ) interval = timeutils.delta_seconds( most_recent_child_timestamp, timeutils.utcnow() ) if interval > check_after_seconds: # We found a task execution in RUNNING state for which all # child executions are finished. We need to call # "schedule_on_action_complete" on the task handler for any of # the child executions so that the task state is calculated and # updated properly. LOG.warning( "Found a task execution that is likely stuck in RUNNING" " state because all child executions are finished," " will try to recover [task_execution=%s]", t_ex.id ) task_handler.schedule_on_action_complete(child_executions[-1]) def pause_workflow(wf_ex, msg=None): # Pause subworkflows first. 
for task_ex in wf_ex.task_executions: sub_wf_exs = db_api.get_workflow_executions( task_execution_id=task_ex.id ) for sub_wf_ex in sub_wf_exs: if not states.is_completed(sub_wf_ex.state): pause_workflow(sub_wf_ex, msg=msg) # If all subworkflows paused successfully, pause the main workflow. # If any subworkflows failed to pause for temporary reason, this # allows pause to be executed again on the main workflow. wf = workflows.Workflow(wf_ex=wf_ex) wf.pause(msg=msg) def rerun_workflow(wf_ex, task_ex, reset=True, env=None): if wf_ex.state == states.PAUSED: return wf_ex.get_clone() wf = workflows.Workflow(wf_ex=wf_ex) wf.rerun(task_ex, reset=reset, env=env) _schedule_check_and_complete(wf_ex) if wf_ex.task_execution_id: _schedule_check_and_complete(wf_ex.task_execution.workflow_execution) def resume_workflow(wf_ex, env=None): if not states.is_paused_or_idle(wf_ex.state): return wf_ex.get_clone() # Resume subworkflows first. for task_ex in wf_ex.task_executions: sub_wf_exs = db_api.get_workflow_executions( task_execution_id=task_ex.id ) for sub_wf_ex in sub_wf_exs: if not states.is_completed(sub_wf_ex.state): resume_workflow(sub_wf_ex) # Resume current workflow here so to trigger continue workflow only # after all other subworkflows are placed back in running state. wf = workflows.Workflow(wf_ex=wf_ex) wf.resume(env=env) @profiler.trace('workflow-handler-set-state', hide_args=True) def set_workflow_state(wf_ex, state, msg=None): if states.is_completed(state): stop_workflow(wf_ex, state, msg) elif states.is_paused(state): pause_workflow(wf_ex, msg) else: raise exc.MistralError( 'Invalid workflow execution state [wf_ex_id=%s, wf_name=%s, ' 'state=%s]' % (wf_ex.id, wf_ex.name, state) ) def _get_completion_check_key(wf_ex): return 'wfh_on_c_a_c-%s' % wf_ex.id @profiler.trace('workflow-handler-schedule-check-and-complete', hide_args=True) def _schedule_check_and_complete(wf_ex, delay=0): """Schedules workflow completion check. 
This method provides transactional decoupling of task completion from workflow completion check. It's needed in non-locking model in order to avoid 'phantom read' phenomena when reading state of multiple tasks to see if a workflow is completed. Just starting a separate transaction without using scheduler is not safe due to concurrency window that we'll have in this case (time between transactions) whereas scheduler is a special component that is designed to be resistant to failures. :param wf_ex: Workflow execution. :param delay: Minimum amount of time before task completion check should be made. """ key = _get_completion_check_key(wf_ex) scheduler.schedule_call( None, _CHECK_AND_COMPLETE_PATH, delay, key=key, wf_ex_id=wf_ex.id ) mistral-6.0.0/mistral/engine/policies.py0000666000175100017510000003436313245513272020334 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
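The back-off heuristic described in the `_check_and_complete` comment above (roughly 0.01 sec per incomplete task, with a fixed fallback delay when the count is unknown or zero) can be sketched as a standalone function; the name `next_check_delay` is illustrative:

```python
def next_check_delay(incomplete_tasks_count, fallback=4):
    # ~0.01 sec per incomplete task; the delay shrinks as tasks complete,
    # so large workflows are polled less aggressively than small ones.
    if incomplete_tasks_count:
        return int(incomplete_tasks_count * 0.01)
    return fallback


print(next_check_delay(100))  # 1 (second)
print(next_check_delay(500))  # 5
print(next_check_delay(0))    # 4 (fallback)
```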
from mistral.db import utils as db_utils from mistral.db.v2 import api as db_api from mistral.engine import action_queue from mistral.engine import base from mistral.engine import workflow_handler as wf_handler from mistral import expressions from mistral.services import scheduler from mistral.utils import wf_trace from mistral.workflow import data_flow from mistral.workflow import states import six _CONTINUE_TASK_PATH = 'mistral.engine.policies._continue_task' _COMPLETE_TASK_PATH = 'mistral.engine.policies._complete_task' _FAIL_IF_INCOMPLETE_TASK_PATH = \ 'mistral.engine.policies._fail_task_if_incomplete' def _log_task_delay(task_ex, delay_sec): wf_trace.info( task_ex, "Task '%s' [%s -> %s, delay = %s sec]" % (task_ex.name, task_ex.state, states.RUNNING_DELAYED, delay_sec) ) def build_policies(policies_spec, wf_spec): task_defaults = wf_spec.get_task_defaults() wf_policies = task_defaults.get_policies() if task_defaults else None if not (policies_spec or wf_policies): return [] return construct_policies_list(policies_spec, wf_policies) def get_policy_factories(): return [ build_pause_before_policy, build_wait_before_policy, build_wait_after_policy, build_retry_policy, build_timeout_policy, build_concurrency_policy ] def construct_policies_list(policies_spec, wf_policies): policies = [] for factory in get_policy_factories(): policy = factory(policies_spec) if wf_policies and not policy: policy = factory(wf_policies) if policy: policies.append(policy) return policies def build_wait_before_policy(policies_spec): if not policies_spec: return None wait_before = policies_spec.get_wait_before() if isinstance(wait_before, six.string_types) or wait_before > 0: return WaitBeforePolicy(wait_before) else: return None def build_wait_after_policy(policies_spec): if not policies_spec: return None wait_after = policies_spec.get_wait_after() if isinstance(wait_after, six.string_types) or wait_after > 0: return WaitAfterPolicy(wait_after) else: return None def 
build_retry_policy(policies_spec): if not policies_spec: return None retry = policies_spec.get_retry() if not retry: return None return RetryPolicy( retry.get_count(), retry.get_delay(), retry.get_break_on(), retry.get_continue_on() ) def build_timeout_policy(policies_spec): if not policies_spec: return None timeout_policy = policies_spec.get_timeout() if isinstance(timeout_policy, six.string_types) or timeout_policy > 0: return TimeoutPolicy(timeout_policy) else: return None def build_pause_before_policy(policies_spec): if not policies_spec: return None pause_before_policy = policies_spec.get_pause_before() return (PauseBeforePolicy(pause_before_policy) if pause_before_policy else None) def build_concurrency_policy(policies_spec): if not policies_spec: return None concurrency_policy = policies_spec.get_concurrency() return (ConcurrencyPolicy(concurrency_policy) if concurrency_policy else None) def _ensure_context_has_key(runtime_context, key): if not runtime_context: runtime_context = {} if key not in runtime_context: runtime_context.update({key: {}}) return runtime_context class WaitBeforePolicy(base.TaskPolicy): _schema = { "properties": { "delay": { "type": "integer", "minimum": 0 } } } def __init__(self, delay): self.delay = delay def before_task_start(self, task_ex, task_spec): super(WaitBeforePolicy, self).before_task_start(task_ex, task_spec) # No need to wait for a task if delay is 0 if self.delay == 0: return context_key = 'wait_before_policy' runtime_context = _ensure_context_has_key( task_ex.runtime_context, context_key ) task_ex.runtime_context = runtime_context policy_context = runtime_context[context_key] if policy_context.get('skip'): # Unset state 'RUNNING_DELAYED'. 
wf_trace.info( task_ex, "Task '%s' [%s -> %s]" % (task_ex.name, states.RUNNING_DELAYED, states.RUNNING) ) task_ex.state = states.RUNNING return if task_ex.state != states.IDLE: policy_context.update({'skip': True}) _log_task_delay(task_ex, self.delay) task_ex.state = states.RUNNING_DELAYED scheduler.schedule_call( None, _CONTINUE_TASK_PATH, self.delay, task_ex_id=task_ex.id, ) class WaitAfterPolicy(base.TaskPolicy): _schema = { "properties": { "delay": { "type": "integer", "minimum": 0 } } } def __init__(self, delay): self.delay = delay def after_task_complete(self, task_ex, task_spec): super(WaitAfterPolicy, self).after_task_complete(task_ex, task_spec) # No need to postpone a task if delay is 0 if self.delay == 0: return context_key = 'wait_after_policy' runtime_context = _ensure_context_has_key( task_ex.runtime_context, context_key ) task_ex.runtime_context = runtime_context policy_context = runtime_context[context_key] if policy_context.get('skip'): # Skip, already processed. return policy_context.update({'skip': True}) _log_task_delay(task_ex, self.delay) end_state = task_ex.state end_state_info = task_ex.state_info # TODO(rakhmerov): Policies probably needs to have tasks.Task # interface in order to change manage task state safely. # Set task state to 'DELAYED'. task_ex.state = states.RUNNING_DELAYED task_ex.state_info = ( 'Suspended by wait-after policy for %s seconds' % self.delay ) # Schedule to change task state to RUNNING again. scheduler.schedule_call( None, _COMPLETE_TASK_PATH, self.delay, task_ex_id=task_ex.id, state=end_state, state_info=end_state_info ) class RetryPolicy(base.TaskPolicy): _schema = { "properties": { "delay": { "type": "integer", "minimum": 0 }, "count": { "type": "integer", "minimum": 0 }, } } def __init__(self, count, delay, break_on, continue_on): self.count = count self.delay = delay self._break_on_clause = break_on self._continue_on_clause = continue_on def after_task_complete(self, task_ex, task_spec): """Possible Cases: 1. 
state = SUCCESS if continue_on is not specified, no need to move to next iteration; if current:count achieve retry:count then policy breaks the loop (regardless on continue-on condition); otherwise - check continue_on condition and if it is True - schedule the next iteration, otherwise policy breaks the loop. 2. retry:count = 5, current:count = 2, state = ERROR, state = IDLE/DELAYED, current:count = 3 3. retry:count = 5, current:count = 4, state = ERROR Iterations complete therefore state = #{state}, current:count = 4. """ super(RetryPolicy, self).after_task_complete(task_ex, task_spec) # There is nothing to repeat if self.count == 0: return # TODO(m4dcoder): If the task_ex.action_executions and # task_ex.workflow_executions collection are not called, # then the retry_no in the runtime_context of the task_ex will not # be updated accurately. To be exact, the retry_no will be one # iteration behind. ex = task_ex.executions # noqa context_key = 'retry_task_policy' runtime_context = _ensure_context_has_key( task_ex.runtime_context, context_key ) wf_ex = task_ex.workflow_execution ctx_view = data_flow.ContextView( data_flow.evaluate_task_outbound_context(task_ex), wf_ex.context, wf_ex.input ) continue_on_evaluation = expressions.evaluate( self._continue_on_clause, ctx_view ) break_on_evaluation = expressions.evaluate( self._break_on_clause, ctx_view ) task_ex.runtime_context = runtime_context state = task_ex.state if not states.is_completed(state) or states.is_cancelled(state): return policy_context = runtime_context[context_key] retry_no = 0 if 'retry_no' in policy_context: retry_no = policy_context['retry_no'] del policy_context['retry_no'] retries_remain = retry_no < self.count stop_continue_flag = ( task_ex.state == states.SUCCESS and not self._continue_on_clause ) stop_continue_flag = ( stop_continue_flag or (self._continue_on_clause and not continue_on_evaluation) ) break_triggered = ( task_ex.state == states.ERROR and break_on_evaluation ) if not retries_remain 
or break_triggered or stop_continue_flag: return _log_task_delay(task_ex, self.delay) data_flow.invalidate_task_execution_result(task_ex) task_ex.state = states.RUNNING_DELAYED policy_context['retry_no'] = retry_no + 1 runtime_context[context_key] = policy_context scheduler.schedule_call( None, _CONTINUE_TASK_PATH, self.delay, task_ex_id=task_ex.id, ) class TimeoutPolicy(base.TaskPolicy): _schema = { "properties": { "delay": { "type": "integer", "minimum": 0 } } } def __init__(self, timeout_sec): self.delay = timeout_sec def before_task_start(self, task_ex, task_spec): super(TimeoutPolicy, self).before_task_start(task_ex, task_spec) # No timeout if delay is 0 if self.delay == 0: return scheduler.schedule_call( None, _FAIL_IF_INCOMPLETE_TASK_PATH, self.delay, task_ex_id=task_ex.id, timeout=self.delay ) wf_trace.info( task_ex, "Timeout check scheduled [task=%s, timeout(s)=%s]." % (task_ex.id, self.delay) ) class PauseBeforePolicy(base.TaskPolicy): _schema = { "properties": { "expr": {"type": "boolean"} } } def __init__(self, expression): self.expr = expression def before_task_start(self, task_ex, task_spec): super(PauseBeforePolicy, self).before_task_start(task_ex, task_spec) if not self.expr: return wf_trace.info( task_ex, "Workflow paused before task '%s' [%s -> %s]" % (task_ex.name, task_ex.workflow_execution.state, states.PAUSED) ) task_ex.state = states.IDLE wf_handler.pause_workflow(task_ex.workflow_execution) class ConcurrencyPolicy(base.TaskPolicy): _schema = { "properties": { "concurrency": { "type": "integer", "minimum": 0 } } } def __init__(self, concurrency): self.concurrency = concurrency def before_task_start(self, task_ex, task_spec): super(ConcurrencyPolicy, self).before_task_start(task_ex, task_spec) if self.concurrency == 0: return # This policy doesn't do anything except validating "concurrency" # property value and setting a variable into task runtime context. 
# This variable is then used to define how many action executions # may be started in parallel. context_key = 'concurrency' runtime_context = _ensure_context_has_key( task_ex.runtime_context, context_key ) runtime_context[context_key] = self.concurrency task_ex.runtime_context = runtime_context @db_utils.retry_on_db_error @action_queue.process def _continue_task(task_ex_id): from mistral.engine import task_handler with db_api.transaction(): task_handler.continue_task(db_api.get_task_execution(task_ex_id)) @db_utils.retry_on_db_error @action_queue.process def _complete_task(task_ex_id, state, state_info): from mistral.engine import task_handler with db_api.transaction(): task_handler.complete_task( db_api.get_task_execution(task_ex_id), state, state_info ) @db_utils.retry_on_db_error @action_queue.process def _fail_task_if_incomplete(task_ex_id, timeout): from mistral.engine import task_handler with db_api.transaction(): task_ex = db_api.get_task_execution(task_ex_id) if not states.is_completed(task_ex.state): msg = 'Task timed out [timeout(s)=%s].' % timeout task_handler.complete_task( db_api.get_task_execution(task_ex_id), states.ERROR, msg ) mistral-6.0.0/mistral/engine/action_queue.py0000666000175100017510000001004113245513272021171 0ustar zuulzuul00000000000000# Copyright 2016 - Nokia Networks. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
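`construct_policies_list` in policies.py above gives a task-level policy spec precedence over the workflow-level task defaults: each factory is consulted with the defaults only when the task spec produced no policy. A minimal sketch of that precedence, with plain dicts and lambdas standing in for the spec objects and the real policy factories (both are assumptions for illustration):

```python
def construct(factories, task_spec, wf_defaults):
    policies = []
    for factory in factories:
        policy = factory(task_spec)          # task-level spec wins
        if wf_defaults and not policy:
            policy = factory(wf_defaults)    # workflow defaults as fallback
        if policy:
            policies.append(policy)
    return policies


make_retry = lambda spec: spec.get('retry') if spec else None
make_timeout = lambda spec: spec.get('timeout') if spec else None

task_spec = {'retry': 'task-retry'}
wf_defaults = {'retry': 'wf-retry', 'timeout': 'wf-timeout'}

print(construct([make_retry, make_timeout], task_spec, wf_defaults))
# ['task-retry', 'wf-timeout']
```

The task's own `retry` shadows the workflow default, while the `timeout` it never declared is inherited.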
import eventlet import functools from oslo_config import cfg from mistral import context from mistral.executors import base as exe from mistral.rpc import clients as rpc from mistral import utils _THREAD_LOCAL_NAME = "__action_queue_thread_local" # Action queue operations. _RUN_ACTION = "run_action" _ON_ACTION_COMPLETE = "on_action_complete" def _prepare(): utils.set_thread_local(_THREAD_LOCAL_NAME, list()) def _clear(): utils.set_thread_local(_THREAD_LOCAL_NAME, None) def _get_queue(): queue = utils.get_thread_local(_THREAD_LOCAL_NAME) if queue is None: raise RuntimeError( 'Action queue is not initialized for the current thread.' ' Most likely some transactional method is not decorated' ' with action_queue.process()' ) return queue def _process_queue(queue): executor = exe.get_executor(cfg.CONF.executor.type) for operation, args in queue: if operation == _RUN_ACTION: action_ex, action_def, target, execution_context, \ timeout = args executor.run_action( action_ex.id, action_def.action_class, action_def.attributes or {}, action_ex.input, action_ex.runtime_context.get('safe_rerun', False), execution_context, target=target, timeout=timeout ) elif operation == _ON_ACTION_COMPLETE: action_ex_id, result, wf_action = args rpc.get_engine_client().on_action_complete( action_ex_id, result, wf_action ) def process(func): """Decorator that processes (runs) all actions in the action queue. Various engine methods may cause new actions to be scheduled. All such methods must be decorated with this decorator. It makes sure to run all the actions in the queue and clean up the queue. """ @functools.wraps(func) def decorate(*args, **kw): _prepare() try: res = func(*args, **kw) queue = _get_queue() auth_ctx = context.ctx() if context.has_ctx() else None # NOTE(rakhmerov): Since we make RPC calls to the engine itself # we need to process the action queue asynchronously in a new # thread. 
Otherwise, if we have only one engine process, the engine # may send a request to itself while already processing # another one. In conjunction with blocking RPC it will lead # to a deadlock (and RPC timeout). def _within_new_thread(): old_auth_ctx = context.ctx() if context.has_ctx() else None context.set_ctx(auth_ctx) try: _process_queue(queue) finally: context.set_ctx(old_auth_ctx) eventlet.spawn(_within_new_thread) finally: _clear() return res return decorate def schedule_run_action(action_ex, action_def, target, execution_context, timeout): args = (action_ex, action_def, target, execution_context, timeout) _get_queue().append((_RUN_ACTION, args)) def schedule_on_action_complete(action_ex_id, result, wf_action=False): _get_queue().append( (_ON_ACTION_COMPLETE, (action_ex_id, result, wf_action)) ) mistral-6.0.0/mistral/engine/dispatcher.py0000666000175100017510000000615113245513261020643 0ustar zuulzuul00000000000000# Copyright 2016 - Nokia Networks # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
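The `@process` decorator in action_queue.py above collects operations in a thread-local queue while a transactional method runs and only dispatches them after the method returns. A minimal sketch of the same pattern, using `threading.local` directly instead of Mistral's `utils` helpers, and returning the flushed operations instead of making executor/RPC calls:

```python
import functools
import threading

_local = threading.local()


def process(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        _local.queue = []                 # prepare a fresh queue
        try:
            result = func(*args, **kwargs)
            flushed = list(_local.queue)  # flush only after func() returns
        finally:
            _local.queue = None           # always clear the thread-local
        return result, flushed
    return wrapper


def schedule(operation):
    # Mirrors _get_queue(): scheduling outside a decorated method is an error.
    if getattr(_local, 'queue', None) is None:
        raise RuntimeError('schedule() called outside a @process method')
    _local.queue.append(operation)


@process
def transactional_work():
    schedule('run_action')
    schedule('on_action_complete')
    return 'ok'


print(transactional_work())  # ('ok', ['run_action', 'on_action_complete'])
```

Deferring the flush until the transaction body has finished is what keeps action dispatch decoupled from the enclosing DB transaction.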
import functools from osprofiler import profiler from mistral import exceptions as exc from mistral.workflow import commands from mistral.workflow import states def _compare_task_commands(a, b): if not isinstance(a, commands.RunTask) or not a.is_waiting(): return -1 if not isinstance(b, commands.RunTask) or not b.is_waiting(): return 1 if a.unique_key == b.unique_key: return 0 if a.unique_key < b.unique_key: return -1 return 1 def _rearrange_commands(cmds): """Takes workflow commands and does required pre-processing. The main idea of the method is to sort task commands with 'waiting' flag by 'unique_key' property in order to guarantee the same locking order for them in parallel transactions and thereby prevent deadlocks. It also removes commands that don't make sense. For example, if there are some commands after a command that changes a workflow state then they must not be dispatched. """ # Remove all 'noop' commands. cmds = list([c for c in cmds if not isinstance(c, commands.Noop)]) state_cmd_idx = -1 state_cmd = None for i, cmd in enumerate(cmds): if isinstance(cmd, commands.SetWorkflowState): state_cmd_idx = i state_cmd = cmd break # Find a position of a 'fail|succeed|pause' command # and sort all task commands before it. if state_cmd_idx < 0: cmds.sort(key=functools.cmp_to_key(_compare_task_commands)) return cmds elif state_cmd_idx == 0: return cmds[0:1] res = cmds[0:state_cmd_idx] res.sort(key=functools.cmp_to_key(_compare_task_commands)) res.append(state_cmd) return res @profiler.trace('dispatcher-dispatch-commands', hide_args=True) def dispatch_workflow_commands(wf_ex, wf_cmds): # TODO(rakhmerov): I don't like these imports but otherwise we have # import cycles. 
from mistral.engine import task_handler from mistral.engine import workflow_handler as wf_handler if not wf_cmds: return for cmd in _rearrange_commands(wf_cmds): if isinstance(cmd, (commands.RunTask, commands.RunExistingTask)): task_handler.run_task(cmd) elif isinstance(cmd, commands.SetWorkflowState): wf_handler.set_workflow_state(wf_ex, cmd.new_state, cmd.msg) else: raise exc.MistralError('Unsupported workflow command: %s' % cmd) if wf_ex.state != states.RUNNING: break mistral-6.0.0/mistral/engine/base.py0000666000175100017510000001444213245513261017431 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # Copyright 2017 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import abc import jsonschema import six from mistral import exceptions as exc from mistral.utils import inspect_utils from mistral.workflow import data_flow @six.add_metaclass(abc.ABCMeta) class Engine(object): """Engine interface.""" @abc.abstractmethod def start_workflow(self, wf_identifier, wf_namespace='', wf_ex_id=None, wf_input=None, description='', **params): """Starts the specified workflow. :param wf_identifier: Workflow ID or name. Workflow ID is recommended, workflow name will be deprecated since Mitaka. :param wf_namespace: Workflow namespace. :param wf_input: Workflow input data as a dictionary. :param wf_ex_id: Workflow execution id. If passed, it will be set in the new execution object. :param description: Execution description. 
:param params: Additional workflow type specific parameters. :return: Workflow execution object. """ raise NotImplementedError @abc.abstractmethod def start_action(self, action_name, action_input, description=None, **params): """Starts the specific action. :param action_name: Action name. :param action_input: Action input data as a dictionary. :param description: Execution description. :param params: Additional options for action running. :return: Action execution object. """ raise NotImplementedError @abc.abstractmethod def on_action_complete(self, action_ex_id, result, wf_action=False, async_=False): """Accepts action result and continues the workflow. Action execution result here is a result which comes from an action/workflow associated which the task. :param action_ex_id: Action execution id. :param result: Action/workflow result. Instance of mistral.workflow.base.Result :param wf_action: If True it means that the given id points to a workflow execution rather than action execution. It happens when a nested workflow execution sends its result to a parent workflow. :param async: If True, run action in asynchronous mode (w/o waiting for completion). :return: Action(or workflow if wf_action=True) execution object. """ raise NotImplementedError @abc.abstractmethod def pause_workflow(self, wf_ex_id): """Pauses workflow. :param wf_ex_id: Execution id. :return: Workflow execution object. """ raise NotImplementedError @abc.abstractmethod def resume_workflow(self, wf_ex_id, env=None): """Resumes workflow. :param wf_ex_id: Execution id. :param env: Workflow environment. :return: Workflow execution object. """ raise NotImplementedError @abc.abstractmethod def rerun_workflow(self, task_ex_id, reset=True, env=None): """Rerun workflow from the specified task. :param task_ex_id: Task execution id. :param reset: If True, reset task state including deleting its action executions. :param env: Workflow environment. :return: Workflow execution object. 
""" raise NotImplementedError @abc.abstractmethod def stop_workflow(self, wf_ex_id, state, message): """Stops workflow. :param wf_ex_id: Workflow execution id. :param state: State assigned to the workflow. Permitted states are SUCCESS or ERROR. :param message: Optional information string. :return: Workflow execution. """ raise NotImplementedError @abc.abstractmethod def rollback_workflow(self, wf_ex_id): """Rolls back workflow execution. :param wf_ex_id: Execution id. :return: Workflow execution object. """ raise NotImplementedError @six.add_metaclass(abc.ABCMeta) class TaskPolicy(object): """Task policy. Provides interface to perform any work after a task has completed. An example of task policy may be 'retry' policy that makes engine to run a task repeatedly if it finishes with a failure. """ _schema = {} def before_task_start(self, task_ex, task_spec): """Called right before task start. :param task_ex: DB model for task that is about to start. :param task_spec: Task specification. """ wf_ex = task_ex.workflow_execution ctx_view = data_flow.ContextView( task_ex.in_context, wf_ex.context, wf_ex.input ) data_flow.evaluate_object_fields(self, ctx_view) self._validate() def after_task_complete(self, task_ex, task_spec): """Called right after task completes. :param task_ex: Completed task DB model. :param task_spec: Completed task specification. """ wf_ex = task_ex.workflow_execution ctx_view = data_flow.ContextView( task_ex.in_context, wf_ex.context, wf_ex.input ) data_flow.evaluate_object_fields(self, ctx_view) self._validate() def _validate(self): """Validation of types after YAQL evaluation.""" props = inspect_utils.get_public_fields(self) try: jsonschema.validate(props, self._schema) except Exception as e: raise exc.InvalidModelException( "Invalid data type in %s: %s. Value(s) can be shown after " "YAQL evaluating. If you use YAQL here, please correct it." 
% (self.__class__.__name__, str(e)) ) mistral-6.0.0/mistral/engine/engine_server.py0000666000175100017510000002157113245513261021353 0ustar zuulzuul00000000000000# Copyright 2016 - Nokia Networks # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_log import log as logging from mistral import config as cfg from mistral.db.v2 import api as db_api from mistral.engine import default_engine from mistral.rpc import base as rpc from mistral.service import base as service_base from mistral.services import expiration_policy from mistral.services import scheduler from mistral import utils from mistral.utils import profiler as profiler_utils LOG = logging.getLogger(__name__) class EngineServer(service_base.MistralService): """Engine server. This class manages engine life-cycle and gets registered as an RPC endpoint to process engine specific calls. It also registers a cluster member associated with this instance of engine. """ def __init__(self, engine, setup_profiler=True): super(EngineServer, self).__init__('engine_group', setup_profiler) self.engine = engine self._rpc_server = None self._scheduler = None self._expiration_policy_tg = None def start(self): super(EngineServer, self).start() db_api.setup_db() self._scheduler = scheduler.start() self._expiration_policy_tg = expiration_policy.setup() if self._setup_profiler: profiler_utils.setup('mistral-engine', cfg.CONF.engine.host) # Initialize and start RPC server. 
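The `TaskPolicy._validate()` pattern above — gather an object's public fields after YAQL evaluation and check them against the policy's declared schema — can be sketched without the `jsonschema` dependency. The `RetryPolicy` class and the type-map schema below are illustrative stand-ins, not the real policy classes; the real code validates against a JSON schema in `self._schema`:

```python
def get_public_fields(obj):
    # Mirrors inspect_utils.get_public_fields(): public, non-callable attrs.
    return {
        k: v for k, v in vars(obj).items()
        if not k.startswith('_') and not callable(v)
    }

class RetryPolicy:
    # Stand-in schema: field name -> expected Python type.
    _schema = {'count': int, 'delay': int}

    def __init__(self, count, delay):
        self.count = count
        self.delay = delay

    def validate(self):
        # Types are only knowable after YAQL evaluation, hence the
        # post-evaluation check, just like TaskPolicy._validate().
        props = get_public_fields(self)
        for name, expected in self._schema.items():
            if not isinstance(props.get(name), expected):
                raise ValueError(
                    "Invalid data type in RetryPolicy: field '%s' must be %s"
                    % (name, expected.__name__)
                )

RetryPolicy(count=3, delay=5).validate()        # well-typed values pass
try:
    # A YAQL expression can easily yield '3' (a string) instead of 3.
    RetryPolicy(count='3', delay=5).validate()
except ValueError as e:
    error = str(e)
```

The point of validating after evaluation, as the docstring notes, is that a YAQL expression's result type is only known at run time.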
self._rpc_server = rpc.get_rpc_server_driver()(cfg.CONF.engine) self._rpc_server.register_endpoint(self) # Note(ddeja): Engine needs to be run in default (blocking) mode # since using another mode may lead to a deadlock. # See https://review.openstack.org/#/c/356343 for more info. self._rpc_server.run(executor='blocking') self._notify_started('Engine server started.') def stop(self, graceful=False): super(EngineServer, self).stop(graceful) if self._scheduler: scheduler.stop_scheduler(self._scheduler, graceful) if self._expiration_policy_tg: self._expiration_policy_tg.stop(graceful) if self._rpc_server: self._rpc_server.stop(graceful) def start_workflow(self, rpc_ctx, wf_identifier, wf_namespace, wf_ex_id, wf_input, description, params): """Receives calls over RPC to start workflows on engine. :param rpc_ctx: RPC request context. :param wf_identifier: Workflow definition identifier. :param wf_namespace: Workflow definition identifier. :param wf_input: Workflow input. :param wf_ex_id: Workflow execution id. If passed, it will be set in the new execution object. :param description: Workflow execution description. :param params: Additional workflow type specific parameters. :return: Workflow execution. """ LOG.info( "Received RPC request 'start_workflow'[workflow_identifier=%s, " "workflow_input=%s, description=%s, params=%s]", wf_identifier, utils.cut(wf_input), description, params ) return self.engine.start_workflow( wf_identifier, wf_namespace, wf_ex_id, wf_input, description, **params ) def start_action(self, rpc_ctx, action_name, action_input, description, params): """Receives calls over RPC to start actions on engine. :param rpc_ctx: RPC request context. :param action_name: name of the Action. :param action_input: input dictionary for Action. :param description: description of new Action execution. :param params: extra parameters to run Action. :return: Action execution. 
""" LOG.info( "Received RPC request 'start_action'[name=%s, input=%s, " "description=%s, params=%s]", action_name, utils.cut(action_input), description, params ) return self.engine.start_action( action_name, action_input, description, **params ) def on_action_complete(self, rpc_ctx, action_ex_id, result, wf_action): """Receives RPC calls to communicate action result to engine. :param rpc_ctx: RPC request context. :param action_ex_id: Action execution id. :param result: Action result data. :param wf_action: True if given id points to a workflow execution. :return: Action execution. """ LOG.info( "Received RPC request 'on_action_complete'[action_ex_id=%s, " "result=%s]", action_ex_id, result.cut_repr() ) return self.engine.on_action_complete(action_ex_id, result, wf_action) def on_action_update(self, rpc_ctx, action_ex_id, state, wf_action): """Receives RPC calls to communicate action execution state to engine. :param rpc_ctx: RPC request context. :param action_ex_id: Action execution id. :param state: Action execution state. :param wf_action: True if given id points to a workflow execution. :return: Action execution. """ LOG.info( "Received RPC request 'on_action_update'" "[action_ex_id=%s, state=%s]", action_ex_id, state ) return self.engine.on_action_update(action_ex_id, state, wf_action) def pause_workflow(self, rpc_ctx, wf_ex_id): """Receives calls over RPC to pause workflows on engine. :param rpc_ctx: Request context. :param wf_ex_id: Workflow execution id. :return: Workflow execution. """ LOG.info( "Received RPC request 'pause_workflow'[execution_id=%s]", wf_ex_id ) return self.engine.pause_workflow(wf_ex_id) def rerun_workflow(self, rpc_ctx, task_ex_id, reset=True, env=None): """Receives calls over RPC to rerun workflows on engine. :param rpc_ctx: RPC request context. :param task_ex_id: Task execution id. :param reset: If true, then purge action execution for the task. :param env: Environment variables to update. :return: Workflow execution. 
""" LOG.info( "Received RPC request 'rerun_workflow'[task_ex_id=%s]", task_ex_id ) return self.engine.rerun_workflow(task_ex_id, reset, env) def resume_workflow(self, rpc_ctx, wf_ex_id, env=None): """Receives calls over RPC to resume workflows on engine. :param rpc_ctx: RPC request context. :param wf_ex_id: Workflow execution id. :param env: Environment variables to update. :return: Workflow execution. """ LOG.info( "Received RPC request 'resume_workflow'[wf_ex_id=%s]", wf_ex_id ) return self.engine.resume_workflow(wf_ex_id, env) def stop_workflow(self, rpc_ctx, wf_ex_id, state, message=None): """Receives calls over RPC to stop workflows on engine. Sets execution state to SUCCESS or ERROR. No more tasks will be scheduled. Running tasks won't be killed, but their results will be ignored. :param rpc_ctx: RPC request context. :param wf_ex_id: Workflow execution id. :param state: State assigned to the workflow. Permitted states are SUCCESS or ERROR. :param message: Optional information string. :return: Workflow execution. """ LOG.info( "Received RPC request 'stop_workflow'[execution_id=%s," " state=%s, message=%s]", wf_ex_id, state, message ) return self.engine.stop_workflow(wf_ex_id, state, message) def rollback_workflow(self, rpc_ctx, wf_ex_id): """Receives calls over RPC to rollback workflows on engine. :param rpc_ctx: RPC request context. :param wf_ex_id Workflow execution id. :return: Workflow execution. """ LOG.info( "Received RPC request 'rollback_workflow'[execution_id=%s]", wf_ex_id ) return self.engine.rollback_workflow(wf_ex_id) def get_oslo_service(setup_profiler=True): return EngineServer( default_engine.DefaultEngine(), setup_profiler=setup_profiler ) mistral-6.0.0/mistral/engine/tasks.py0000666000175100017510000006312713245513272017652 0ustar zuulzuul00000000000000# Copyright 2016 - Nokia Networks. # Copyright 2016 - Brocade Communications Systems, Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import abc import copy from oslo_log import log as logging from osprofiler import profiler import six from mistral.db.v2 import api as db_api from mistral.engine import actions from mistral.engine import dispatcher from mistral.engine import policies from mistral import exceptions as exc from mistral import expressions as expr from mistral import utils from mistral.utils import wf_trace from mistral.workflow import base as wf_base from mistral.workflow import data_flow from mistral.workflow import states LOG = logging.getLogger(__name__) @six.add_metaclass(abc.ABCMeta) class Task(object): """Task. Represents a workflow task and defines interface that can be used by Mistral engine or its components in order to manipulate with tasks. """ def __init__(self, wf_ex, wf_spec, task_spec, ctx, task_ex=None, unique_key=None, waiting=False, triggered_by=None): self.wf_ex = wf_ex self.task_spec = task_spec self.ctx = ctx self.task_ex = task_ex self.wf_spec = wf_spec self.unique_key = unique_key self.waiting = waiting self.triggered_by = triggered_by self.reset_flag = False self.created = False self.state_changed = False def is_completed(self): return self.task_ex and states.is_completed(self.task_ex.state) def is_waiting(self): return self.waiting def is_created(self): return self.created def is_state_changed(self): return self.state_changed @abc.abstractmethod def on_action_complete(self, action_ex): """Handle action completion. 
:param action_ex: Action execution. """ raise NotImplementedError @abc.abstractmethod def on_action_update(self, action_ex): """Handle action update. :param action_ex: Action execution. """ raise NotImplementedError @abc.abstractmethod def run(self): """Runs task.""" raise NotImplementedError @profiler.trace('task-defer') def defer(self): """Defers task. This method puts task to a waiting state. """ with db_api.named_lock(self.unique_key): if not self.task_ex: t_execs = db_api.get_task_executions( workflow_execution_id=self.wf_ex.id, unique_key=self.unique_key ) self.task_ex = t_execs[0] if t_execs else None msg = 'Task is waiting.' if not self.task_ex: self._create_task_execution( state=states.WAITING, state_info=msg ) elif self.task_ex.state != states.WAITING: self.set_state(states.WAITING, msg) def reset(self): self.reset_flag = True @profiler.trace('task-set-state') def set_state(self, state, state_info, processed=None): """Sets task state without executing post completion logic. :param state: New task state. :param state_info: New state information (i.e. error message). :param processed: New "processed" flag value. :return: True if the state was changed as a result of this call, False otherwise. """ assert self.task_ex cur_state = self.task_ex.state if cur_state != state or self.task_ex.state_info != state_info: task_ex = db_api.update_task_execution_state( id=self.task_ex.id, cur_state=cur_state, state=state ) if task_ex is None: # Do nothing because the update query did not change the DB. return False self.task_ex = task_ex self.task_ex.state_info = state_info self.state_changed = True if processed is not None: self.task_ex.processed = processed wf_trace.info( self.task_ex.workflow_execution, "Task '%s' (%s) [%s -> %s, msg=%s]" % (self.task_ex.name, self.task_ex.id, cur_state, state, state_info) ) return True @profiler.trace('task-complete') def complete(self, state, state_info=None): """Complete task and set specified state. 
Method sets specified task state and runs all necessary post completion logic such as publishing workflow variables and scheduling new workflow commands. :param state: New task state. :param state_info: New state information (i.e. error message). """ assert self.task_ex # Ignore if task already completed. if self.is_completed(): return # If we were unable to change the task state it means that it was # already changed by a concurrent process. In this case we need to # skip all regular completion logic like scheduling new tasks, # running engine commands and publishing. if not self.set_state(state, state_info): return data_flow.publish_variables(self.task_ex, self.task_spec) if not self.task_spec.get_keep_result(): # Destroy task result. for ex in self.task_ex.action_executions: if hasattr(ex, 'output'): ex.output = {} self._after_task_complete() # Ignore DELAYED state. if self.task_ex.state == states.RUNNING_DELAYED: return # If workflow is paused we shouldn't schedule new commands # and mark task as processed. if states.is_paused(self.wf_ex.state): return wf_ctrl = wf_base.get_controller(self.wf_ex, self.wf_spec) # Calculate commands to process next. cmds = wf_ctrl.continue_workflow(task_ex=self.task_ex) # Mark task as processed after all decisions have been made # upon its completion. self.task_ex.processed = True dispatcher.dispatch_workflow_commands(self.wf_ex, cmds) @profiler.trace('task-update') def update(self, state, state_info=None): """Update task and set specified state. Method sets specified task state. :param state: New task state. :param state_info: New state information (i.e. error message). """ assert self.task_ex # Ignore if task already completed. if states.is_completed(self.task_ex.state): return # Update only if state transition is valid. if not states.is_valid_transition(self.task_ex.state, state): return # We can't set the task state to RUNNING if some other # child executions are paused. 
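The concurrency guard in `set_state()`/`complete()` above — skip all post-completion logic when `set_state()` reports that another process already changed the state — amounts to a compare-and-swap on the task row. A minimal in-memory sketch of the idea; `TaskRecord` is a hypothetical stand-in for the DB model, and the lock stands in for what is really an `UPDATE ... WHERE state = :cur_state` query in `db_api.update_task_execution_state()`:

```python
import threading

class TaskRecord:
    def __init__(self, state='RUNNING'):
        self.state = state
        self._lock = threading.Lock()  # stands in for the DB row guard

    def update_state(self, cur_state, new_state):
        """Return the record if updated, None if cur_state was stale."""
        with self._lock:
            if self.state != cur_state:
                # Another process changed the state first; the update
                # query would not match any row, so we change nothing.
                return None
            self.state = new_state
            return self

task = TaskRecord('RUNNING')
first = task.update_state('RUNNING', 'SUCCESS')   # wins the race
second = task.update_state('RUNNING', 'ERROR')    # stale read, loses
```

Under the READ_COMMITTED isolation level described in the release notes, the loser of this race sees the committed winner's state and returns without dispatching duplicate workflow commands.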
child_states = [a_ex.state for a_ex in self.task_ex.executions] if state == states.RUNNING and states.PAUSED in child_states: return self.set_state(state, state_info) def _before_task_start(self): policies_spec = self.task_spec.get_policies() for p in policies.build_policies(policies_spec, self.wf_spec): p.before_task_start(self.task_ex, self.task_spec) def _after_task_complete(self): policies_spec = self.task_spec.get_policies() for p in policies.build_policies(policies_spec, self.wf_spec): p.after_task_complete(self.task_ex, self.task_spec) @profiler.trace('task-create-task-execution') def _create_task_execution(self, state=states.RUNNING, state_info=None): task_id = utils.generate_unicode_uuid() task_name = self.task_spec.get_name() task_type = self.task_spec.get_type() data_flow.add_current_task_to_context(self.ctx, task_id, task_name) values = { 'id': task_id, 'name': task_name, 'workflow_execution_id': self.wf_ex.id, 'workflow_name': self.wf_ex.workflow_name, 'workflow_namespace': self.wf_ex.workflow_namespace, 'workflow_id': self.wf_ex.workflow_id, 'state': state, 'state_info': state_info, 'spec': self.task_spec.to_dict(), 'unique_key': self.unique_key, 'in_context': self.ctx, 'published': {}, 'runtime_context': {}, 'project_id': self.wf_ex.project_id, 'type': task_type } if self.triggered_by: values['runtime_context']['triggered_by'] = self.triggered_by self.task_ex = db_api.create_task_execution(values) self.created = True def _get_action_defaults(self): action_name = self.task_spec.get_action_name() if not action_name: return {} env = self.wf_ex.context.get('__env', {}) return env.get('__actions', {}).get(action_name, {}) class RegularTask(Task): """Regular task. Takes care of processing regular tasks with one action. 
""" @profiler.trace('regular-task-on-action-complete', hide_args=True) def on_action_complete(self, action_ex): state = action_ex.state # TODO(rakhmerov): Here we can define more informative messages # cases when action is successful and when it's not. For example, # in state_info we can specify the cause action. state_info = (None if state == states.SUCCESS else action_ex.output.get('result')) self.complete(state, state_info) @profiler.trace('regular-task-on-action-update', hide_args=True) def on_action_update(self, action_ex): self.update(action_ex.state) @profiler.trace('task-run') def run(self): if not self.task_ex: self._run_new() else: self._run_existing() @profiler.trace('task-run-new') def _run_new(self): if self.waiting: self.defer() return self._create_task_execution() LOG.debug( 'Starting task [workflow=%s, task=%s, init_state=%s]', self.wf_ex.name, self.task_spec.get_name(), self.task_ex.state ) self._before_task_start() # Policies could possibly change task state. if self.task_ex.state != states.RUNNING: return self._schedule_actions() @profiler.trace('task-run-existing') def _run_existing(self): if self.waiting: return # Explicitly change task state to RUNNING. # Throw exception if the existing task already succeeded. if self.task_ex.state == states.SUCCESS: raise exc.MistralError( 'Rerunning succeeded tasks is not supported.' 
) self.set_state(states.RUNNING, None, processed=False) self._update_inbound_context() self._update_triggered_by() self._reset_actions() self._schedule_actions() def _update_inbound_context(self): task_ex = self.task_ex assert task_ex wf_ctrl = wf_base.get_controller(self.wf_ex, self.wf_spec) self.ctx = wf_ctrl.get_task_inbound_context(self.task_spec) data_flow.add_current_task_to_context(self.ctx, task_ex.id, task_ex.name) utils.update_dict(task_ex.in_context, self.ctx) def _update_triggered_by(self): assert self.task_ex if not self.triggered_by: return self.task_ex.runtime_context['triggered_by'] = self.triggered_by def _reset_actions(self): """Resets task state. Depending on task type this method may reset task state. For example, delete all task actions etc. """ # Reset state of processed task and related action executions. if self.reset_flag: execs = self.task_ex.executions else: execs = [e for e in self.task_ex.executions if (e.accepted and e.state in [states.ERROR, states.CANCELLED])] for ex in execs: ex.accepted = False def _schedule_actions(self): # Regular task schedules just one action. input_dict = self._get_action_input() target = self._get_target(input_dict) action = self._build_action() action.validate_input(input_dict) action.schedule( input_dict, target, safe_rerun=self.task_spec.get_safe_rerun(), timeout=self._get_timeout() ) @profiler.trace('regular-task-get-target', hide_args=True) def _get_target(self, input_dict): ctx_view = data_flow.ContextView( input_dict, self.ctx, self.wf_ex.context, self.wf_ex.input ) return expr.evaluate_recursively( self.task_spec.get_target(), ctx_view ) @profiler.trace('regular-task-get-action-input', hide_args=True) def _get_action_input(self, ctx=None): input_dict = self._evaluate_expression(self.task_spec.get_input(), ctx) if not isinstance(input_dict, dict): raise exc.InputException( "Wrong dynamic input for task: %s. Dict type is expected. " "Actual type: %s. 
Actual value: %s" % (self.task_spec.get_name(), type(input_dict), str(input_dict)) ) return utils.merge_dicts( input_dict, self._get_action_defaults(), overwrite=False ) def _evaluate_expression(self, expression, ctx=None): ctx = ctx or self.ctx ctx_view = data_flow.ContextView( ctx, self.wf_ex.context, self.wf_ex.input ) input_dict = expr.evaluate_recursively( expression, ctx_view ) return input_dict def _build_action(self): action_name = self.task_spec.get_action_name() wf_name = self.task_spec.get_workflow_name() # For dynamic workflow evaluation we regenerate the action. if wf_name: return actions.WorkflowAction( wf_name=self._evaluate_expression(wf_name), task_ex=self.task_ex ) # For dynamic action evaluation we just regenerate the name. if action_name: action_name = self._evaluate_expression(action_name) if not action_name: action_name = 'std.noop' action_def = actions.resolve_action_definition( action_name, self.wf_ex.name, self.wf_spec.get_name() ) if action_def.spec: return actions.AdHocAction(action_def, task_ex=self.task_ex, task_ctx=self.ctx, wf_ctx=self.wf_ex.context) return actions.PythonAction(action_def, task_ex=self.task_ex) def _get_timeout(self): timeout = self.task_spec.get_policies().get_timeout() if not isinstance(timeout, (int, float)): wf_ex = self.task_ex.workflow_execution ctx_view = data_flow.ContextView( self.task_ex.in_context, wf_ex.context, wf_ex.input ) timeout = expr.evaluate_recursively(data=timeout, context=ctx_view) return timeout if timeout > 0 else None class WithItemsTask(RegularTask): """With-items task. Takes care of processing "with-items" tasks. 
""" _CONCURRENCY = 'concurrency' _CAPACITY = 'capacity' _COUNT = 'count' _WITH_ITEMS = 'with_items' _DEFAULT_WITH_ITEMS = { _COUNT: 0, _CONCURRENCY: 0, _CAPACITY: 0 } @profiler.trace('with-items-task-on-action-complete', hide_args=True) def on_action_complete(self, action_ex): assert self.task_ex with db_api.named_lock('with-items-%s' % self.task_ex.id): # NOTE: We need to refresh task execution object right # after the lock is acquired to make sure that we're # working with a fresh state of its runtime context. # Otherwise, SQLAlchemy session can contain a stale # cached version of it so that we don't modify actual # values (i.e. capacity). db_api.refresh(self.task_ex) if self.is_completed(): return self._increase_capacity() if self.is_with_items_completed(): state = self._get_final_state() # TODO(rakhmerov): Here we can define more informative messages # in cases when action is successful and when it's not. # For example, in state_info we can specify the cause action. # The use of action_ex.output.get('result') for state_info is # not accurate because there could be action executions that # had failed or was cancelled prior to this action execution. state_info = { states.SUCCESS: None, states.ERROR: 'One or more actions had failed.', states.CANCELLED: 'One or more actions was cancelled.' 
} self.complete(state, state_info[state]) return if self._has_more_iterations() and self._get_concurrency(): self._schedule_actions() def _schedule_actions(self): with_items_values = self._get_with_items_values() if self._is_new(): self._validate_values(with_items_values) action_count = len(six.next(iter(with_items_values.values()))) self._prepare_runtime_context(action_count) input_dicts = self._get_input_dicts(with_items_values) if not input_dicts: self.complete(states.SUCCESS) return for i, input_dict in input_dicts: target = self._get_target(input_dict) action = self._build_action() action.validate_input(input_dict) action.schedule( input_dict, target, index=i, safe_rerun=self.task_spec.get_safe_rerun(), timeout=self._get_timeout() ) self._decrease_capacity(1) def _get_with_items_values(self): """Returns all values evaluated from 'with-items' expression. Example: DSL: with-items: - var1 in <% $.arrayI %> - var2 in <% $.arrayJ %> where arrayI = [1,2,3] and arrayJ = [a,b,c] The result of the method in this case will be: { 'var1': [1,2,3], 'var2': [a,b,c] } :return: Evaluated 'with-items' expression values. """ ctx_view = data_flow.ContextView( self.ctx, self.wf_ex.context, self.wf_ex.input ) return expr.evaluate_recursively( self.task_spec.get_with_items(), ctx_view ) def _validate_values(self, with_items_values): # Take only mapped values and check them. values = list(with_items_values.values()) if not all([isinstance(v, list) for v in values]): raise exc.InputException( "Wrong input format for: %s. List type is" " expected for each value." % with_items_values ) required_len = len(values[0]) if not all(len(v) == required_len for v in values): raise exc.InputException( "Wrong input format for: %s. All arrays must" " have the same length." % with_items_values ) def _get_input_dicts(self, with_items_values): """Calculate input dictionaries for another portion of actions. :return: a list of tuples containing indexes and corresponding input dicts. 
""" result = [] for i in self._get_next_indexes(): ctx = {} for k, v in with_items_values.items(): ctx.update({k: v[i]}) ctx = utils.merge_dicts(ctx, self.ctx) result.append((i, self._get_action_input(ctx))) return result def _get_with_items_context(self): return self.task_ex.runtime_context.get( self._WITH_ITEMS, self._DEFAULT_WITH_ITEMS ) def _get_with_items_count(self): return self._get_with_items_context()[self._COUNT] def _get_with_items_capacity(self): return self._get_with_items_context()[self._CAPACITY] def _get_concurrency(self): return self.task_ex.runtime_context.get(self._CONCURRENCY) def is_with_items_completed(self): find_cancelled = lambda x: x.accepted and x.state == states.CANCELLED if list(filter(find_cancelled, self.task_ex.executions)): return True execs = list([t for t in self.task_ex.executions if t.accepted]) count = self._get_with_items_count() or 1 # We need to make sure that method on_action_complete() has been # called for every action. Just looking at number of actions and # their 'accepted' flag is not enough because action gets accepted # before on_action_complete() is called for it. This call is # mandatory in order to do all needed processing from task # perspective. So we can simply check if capacity is fully reset # to its initial state. full_capacity = ( not self._get_concurrency() or self._get_with_items_capacity() == self._get_concurrency() ) return count == len(execs) and full_capacity def _get_final_state(self): find_cancelled = lambda x: x.accepted and x.state == states.CANCELLED find_error = lambda x: x.accepted and x.state == states.ERROR if list(filter(find_cancelled, self.task_ex.executions)): return states.CANCELLED elif list(filter(find_error, self.task_ex.executions)): return states.ERROR else: return states.SUCCESS def _get_accepted_executions(self): # Choose only if not accepted but completed. 
return list( [x for x in self.task_ex.executions if x.accepted and states.is_completed(x.state)] ) def _get_unaccepted_executions(self): # Choose only if not accepted but completed. return list( filter( lambda x: not x.accepted and states.is_completed(x.state), self.task_ex.executions ) ) def _get_next_start_index(self): f = lambda x: ( x.accepted or states.is_running(x.state) or states.is_idle(x.state) ) return len(list(filter(f, self.task_ex.executions))) def _get_next_indexes(self): capacity = self._get_with_items_capacity() count = self._get_with_items_count() def _get_indexes(exs): return sorted(set([ex.runtime_context['index'] for ex in exs])) accepted = _get_indexes(self._get_accepted_executions()) unaccepted = _get_indexes(self._get_unaccepted_executions()) candidates = sorted(list(set(unaccepted) - set(accepted))) if candidates: indices = copy.copy(candidates) if max(candidates) < count - 1: indices += list(six.moves.range(max(candidates) + 1, count)) else: i = self._get_next_start_index() indices = list(six.moves.range(i, count)) return indices[:capacity] def _increase_capacity(self): ctx = self._get_with_items_context() concurrency = self._get_concurrency() if concurrency and ctx[self._CAPACITY] < concurrency: ctx[self._CAPACITY] += 1 self.task_ex.runtime_context.update({self._WITH_ITEMS: ctx}) def _decrease_capacity(self, count): ctx = self._get_with_items_context() capacity = ctx[self._CAPACITY] if capacity is not None: if capacity >= count: ctx[self._CAPACITY] -= count else: raise RuntimeError( "Can't decrease with-items capacity" " [capacity=%s, count=%s]" % (capacity, count) ) self.task_ex.runtime_context.update({self._WITH_ITEMS: ctx}) def _is_new(self): return not self.task_ex.runtime_context.get(self._WITH_ITEMS) def _prepare_runtime_context(self, action_count): runtime_ctx = self.task_ex.runtime_context if not runtime_ctx.get(self._WITH_ITEMS): # Prepare current indexes and parallel limitation. 
runtime_ctx[self._WITH_ITEMS] = { self._CAPACITY: self._get_concurrency(), self._COUNT: action_count } def _has_more_iterations(self): # See action executions which have been already # accepted or are still running. action_exs = list(filter( lambda x: x.accepted or x.state == states.RUNNING, self.task_ex.executions )) return self._get_with_items_count() > len(action_exs) mistral-6.0.0/mistral/engine/__init__.py0000666000175100017510000000152313245513261020252 0ustar zuulzuul00000000000000# Copyright 2015 - Huawei Technologies Co. Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_utils import importutils # NOTE(xylan): import modules for WorkflowHandler subclasses iteration importutils.import_module('mistral.workflow.direct_workflow') importutils.import_module('mistral.workflow.reverse_workflow') mistral-6.0.0/mistral/engine/workflows.py0000666000175100017510000004417313245513272020562 0ustar zuulzuul00000000000000# Copyright 2016 - Nokia Networks. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. import abc from oslo_config import cfg from oslo_log import log as logging from osprofiler import profiler import six from mistral.db.v2 import api as db_api from mistral.db.v2.sqlalchemy import models as db_models from mistral.engine import action_queue from mistral.engine import dispatcher from mistral.engine import utils as engine_utils from mistral import exceptions as exc from mistral.lang import parser as spec_parser from mistral.services import triggers from mistral.services import workflows as wf_service from mistral import utils from mistral.utils import merge_dicts from mistral.utils import wf_trace from mistral.workflow import base as wf_base from mistral.workflow import commands from mistral.workflow import data_flow from mistral.workflow import lookup_utils from mistral.workflow import states from mistral_lib import actions as ml_actions LOG = logging.getLogger(__name__) @six.add_metaclass(abc.ABCMeta) class Workflow(object): """Workflow. Represents a workflow and defines interface that can be used by Mistral engine or its components in order to manipulate with workflows. """ def __init__(self, wf_ex=None): self.wf_ex = wf_ex if wf_ex: # We're processing a workflow that's already in progress. self.wf_spec = spec_parser.get_workflow_spec_by_execution_id( wf_ex.id ) else: self.wf_spec = None @profiler.trace('workflow-start') def start(self, wf_def, wf_ex_id, input_dict, desc='', params=None): """Start workflow. :param wf_def: Workflow definition. :param wf_ex_id: Workflow execution id. :param input_dict: Workflow input. :param desc: Workflow execution description. :param params: Workflow type specific parameters. :raises """ assert not self.wf_ex # New workflow execution. 
self.wf_spec = spec_parser.get_workflow_spec_by_definition_id( wf_def.id, wf_def.updated_at ) wf_trace.info( self.wf_ex, 'Starting workflow [name=%s, input=%s]' % (wf_def.name, utils.cut(input_dict)) ) self.validate_input(input_dict) self._create_execution( wf_def, wf_ex_id, self.prepare_input(input_dict), desc, params ) self.set_state(states.RUNNING) wf_ctrl = wf_base.get_controller(self.wf_ex, self.wf_spec) dispatcher.dispatch_workflow_commands( self.wf_ex, wf_ctrl.continue_workflow() ) def stop(self, state, msg=None): """Stop workflow. :param state: New workflow state. :param msg: Additional explaining message. """ assert self.wf_ex if state == states.SUCCESS: self._succeed_workflow(self._get_final_context(), msg) elif state == states.ERROR: self._fail_workflow(self._get_final_context(), msg) elif state == states.CANCELLED: self._cancel_workflow(msg) def pause(self, msg=None): """Pause workflow. :param msg: Additional explaining message. """ assert self.wf_ex if states.is_paused(self.wf_ex.state): return # Set the state of this workflow to paused. self.set_state(states.PAUSED, state_info=msg) # If workflow execution is a subworkflow, # schedule update to the task execution. if self.wf_ex.task_execution_id: # Import the task_handler module here to avoid circular reference. from mistral.engine import task_handler task_handler.schedule_on_action_update(self.wf_ex) return def resume(self, env=None): """Resume workflow. :param env: Environment. """ assert self.wf_ex wf_service.update_workflow_execution_env(self.wf_ex, env) self.set_state(states.RUNNING) wf_ctrl = wf_base.get_controller(self.wf_ex) # Calculate commands to process next. cmds = wf_ctrl.continue_workflow() self._continue_workflow(cmds) # If workflow execution is a subworkflow, # schedule update to the task execution. if self.wf_ex.task_execution_id: # Import the task_handler module here to avoid circular reference. 
from mistral.engine import task_handler task_handler.schedule_on_action_update(self.wf_ex) def prepare_input(self, input_dict): for k, v in self.wf_spec.get_input().items(): if k not in input_dict or input_dict[k] is utils.NotDefined: input_dict[k] = v return input_dict def validate_input(self, input_dict): engine_utils.validate_input( self.wf_spec.get_input(), input_dict, self.wf_spec.get_name(), self.wf_spec.__class__.__name__ ) def rerun(self, task_ex, reset=True, env=None): """Rerun workflow from the given task. :param task_ex: Task execution that the workflow needs to rerun from. :param reset: If True, reset task state including deleting its action executions. :param env: Environment. """ assert self.wf_ex # Since some lookup utils functions may use cache for completed tasks # we need to clean caches to make sure that stale objects can't be # retrieved. lookup_utils.clear_caches() wf_service.update_workflow_execution_env(self.wf_ex, env) self.set_state(states.RUNNING, recursive=True) wf_ctrl = wf_base.get_controller(self.wf_ex) # Calculate commands to process next. cmds = wf_ctrl.rerun_tasks([task_ex], reset=reset) self._continue_workflow(cmds) def _continue_workflow(self, cmds): # When resuming a workflow we need to ignore all 'pause' # commands because workflow controller takes tasks that # completed within the period when the workflow was paused. cmds = list( [c for c in cmds if not isinstance(c, commands.PauseWorkflow)] ) # Since there's no explicit task causing the operation # we need to mark all not processed tasks as processed # because workflow controller takes only completed tasks # with flag 'processed' equal to False. 
for t_ex in self.wf_ex.task_executions: if states.is_completed(t_ex.state) and not t_ex.processed: t_ex.processed = True if cmds: dispatcher.dispatch_workflow_commands(self.wf_ex, cmds) else: self.check_and_complete() @profiler.trace('workflow-lock') def lock(self): assert self.wf_ex return db_api.acquire_lock(db_models.WorkflowExecution, self.wf_ex.id) def _get_final_context(self): wf_ctrl = wf_base.get_controller(self.wf_ex) final_context = {} try: final_context = wf_ctrl.evaluate_workflow_final_context() except Exception as e: LOG.warning( 'Failed to get final context for workflow execution. ' '[wf_ex_id: %s, wf_name: %s, error: %s]', self.wf_ex.id, self.wf_ex.name, str(e) ) return final_context def _create_execution(self, wf_def, wf_ex_id, input_dict, desc, params): self.wf_ex = db_api.create_workflow_execution({ 'id': wf_ex_id, 'name': wf_def.name, 'description': desc, 'workflow_name': wf_def.name, 'workflow_namespace': wf_def.namespace, 'workflow_id': wf_def.id, 'spec': self.wf_spec.to_dict(), 'state': states.IDLE, 'output': {}, 'task_execution_id': params.get('task_execution_id'), 'root_execution_id': params.get('root_execution_id'), 'runtime_context': { 'index': params.get('index', 0) }, }) self.wf_ex.input = input_dict or {} env = _get_environment(params) if env: params['env'] = env self.wf_ex.params = params data_flow.add_openstack_data_to_context(self.wf_ex) data_flow.add_execution_to_context(self.wf_ex) data_flow.add_environment_to_context(self.wf_ex) data_flow.add_workflow_variables_to_context(self.wf_ex, self.wf_spec) spec_parser.cache_workflow_spec_by_execution_id( self.wf_ex.id, self.wf_spec ) @profiler.trace('workflow-set-state') def set_state(self, state, state_info=None, recursive=False): assert self.wf_ex cur_state = self.wf_ex.state if states.is_valid_transition(cur_state, state): wf_ex = db_api.update_workflow_execution_state( id=self.wf_ex.id, cur_state=cur_state, state=state ) if wf_ex is None: # Do nothing because the state was updated 
previously. return self.wf_ex = wf_ex self.wf_ex.state_info = state_info wf_trace.info( self.wf_ex, "Workflow '%s' [%s -> %s, msg=%s]" % (self.wf_ex.workflow_name, cur_state, state, state_info) ) else: msg = ("Can't change workflow execution state from %s to %s. " "[workflow=%s, execution_id=%s]" % (cur_state, state, self.wf_ex.name, self.wf_ex.id)) raise exc.WorkflowException(msg) # Workflow result should be accepted by parent workflows (if any) # only if it completed successfully or failed. self.wf_ex.accepted = states.is_completed(state) if states.is_completed(state): # No need to keep task executions of this workflow in the # lookup cache anymore. lookup_utils.invalidate_cached_task_executions(self.wf_ex.id) triggers.on_workflow_complete(self.wf_ex) if recursive and self.wf_ex.task_execution_id: parent_task_ex = db_api.get_task_execution( self.wf_ex.task_execution_id ) parent_wf = Workflow(wf_ex=parent_task_ex.workflow_execution) parent_wf.lock() parent_wf.set_state(state, recursive=recursive) # TODO(rakhmerov): It'd be better to use instance of Task here. parent_task_ex.state = state parent_task_ex.state_info = None parent_task_ex.processed = False @profiler.trace('workflow-check-and-complete') def check_and_complete(self): """Completes the workflow if it needs to be completed. The method simply checks if there are any tasks that are not in a terminal state. If there aren't any then it performs all necessary logic to finalize the workflow (calculate output etc.). :return: Number of incomplete tasks. """ if states.is_paused_or_completed(self.wf_ex.state): return 0 # Workflow is not completed if there are any incomplete task # executions. 
incomplete_tasks_count = db_api.get_incomplete_task_executions_count( workflow_execution_id=self.wf_ex.id, ) if incomplete_tasks_count > 0: return incomplete_tasks_count wf_ctrl = wf_base.get_controller(self.wf_ex, self.wf_spec) if wf_ctrl.any_cancels(): msg = _build_cancel_info_message(wf_ctrl, self.wf_ex) self._cancel_workflow(msg) elif wf_ctrl.all_errors_handled(): ctx = wf_ctrl.evaluate_workflow_final_context() self._succeed_workflow(ctx) else: msg = _build_fail_info_message(wf_ctrl, self.wf_ex) final_context = wf_ctrl.evaluate_workflow_final_context() self._fail_workflow(final_context, msg) return 0 def _succeed_workflow(self, final_context, msg=None): self.wf_ex.output = data_flow.evaluate_workflow_output( self.wf_ex, self.wf_spec.get_output(), final_context ) # Set workflow execution to success until after output is evaluated. self.set_state(states.SUCCESS, msg) if self.wf_ex.task_execution_id: self._send_result_to_parent_workflow() def _fail_workflow(self, final_context, msg): if states.is_paused_or_completed(self.wf_ex.state): return output_on_error = {} try: output_on_error = data_flow.evaluate_workflow_output( self.wf_ex, self.wf_spec.get_output_on_error(), final_context ) except exc.MistralException as e: msg = ( "Failed to evaluate expression in output-on-error! " "(output-on-error: '%s', exception: '%s' Cause: '%s'" % (self.wf_spec.get_output_on_error(), e, msg) ) LOG.error(msg) self.set_state(states.ERROR, state_info=msg) # When we set an ERROR state we should safely set output value getting # w/o exceptions due to field size limitations. 
length_output_on_error = len(str(output_on_error).encode("utf-8")) total_output_length = utils.get_number_of_chars_from_kilobytes( cfg.CONF.engine.execution_field_size_limit_kb) if length_output_on_error < total_output_length: msg = utils.cut_by_char( msg, total_output_length - length_output_on_error ) else: msg = utils.cut_by_kb( msg, cfg.CONF.engine.execution_field_size_limit_kb ) self.wf_ex.output = merge_dicts({'result': msg}, output_on_error) if self.wf_ex.task_execution_id: self._send_result_to_parent_workflow() def _cancel_workflow(self, msg): if states.is_completed(self.wf_ex.state): return self.set_state(states.CANCELLED, state_info=msg) # When we set an ERROR state we should safely set output value getting # w/o exceptions due to field size limitations. msg = utils.cut_by_kb( msg, cfg.CONF.engine.execution_field_size_limit_kb ) self.wf_ex.output = {'result': msg} if self.wf_ex.task_execution_id: self._send_result_to_parent_workflow() def _send_result_to_parent_workflow(self): if self.wf_ex.state == states.SUCCESS: result = ml_actions.Result(data=self.wf_ex.output) elif self.wf_ex.state == states.ERROR: err_msg = ( self.wf_ex.state_info or 'Failed subworkflow [execution_id=%s]' % self.wf_ex.id ) result = ml_actions.Result(error=err_msg) elif self.wf_ex.state == states.CANCELLED: err_msg = ( self.wf_ex.state_info or 'Cancelled subworkflow [execution_id=%s]' % self.wf_ex.id ) result = ml_actions.Result(error=err_msg, cancel=True) else: raise RuntimeError( "Method _send_result_to_parent_workflow() must never be called" " if a workflow is not in SUCCESS, ERROR or CANCELLED state." 
) action_queue.schedule_on_action_complete( self.wf_ex.id, result, wf_action=True ) def _get_environment(params): env = params.get('env', {}) if isinstance(env, dict): return env if isinstance(env, six.string_types): env_db = db_api.load_environment(env) if not env_db: raise exc.InputException( 'Environment is not found: %s' % env ) return env_db.variables raise exc.InputException( 'Unexpected value type for environment [env=%s, type=%s]' % (env, type(env)) ) def _build_fail_info_message(wf_ctrl, wf_ex): # Try to find where error is exactly. failed_tasks = sorted( filter( lambda t: not wf_ctrl.is_error_handled_for(t), lookup_utils.find_error_task_executions(wf_ex.id) ), key=lambda t: t.name ) msg = ('Failure caused by error in tasks: %s\n' % ', '.join([t.name for t in failed_tasks])) for t in failed_tasks: msg += '\n %s [task_ex_id=%s] -> %s\n' % (t.name, t.id, t.state_info) for i, ex in enumerate(t.action_executions): if ex.state == states.ERROR: output = (ex.output or dict()).get('result', 'Unknown') msg += ( ' [action_ex_id=%s, idx=%s]: %s\n' % ( ex.id, i, str(output) ) ) for i, ex in enumerate(t.workflow_executions): if ex.state == states.ERROR: output = (ex.output or dict()).get('result', 'Unknown') msg += ( ' [wf_ex_id=%s, idx=%s]: %s\n' % ( ex.id, i, str(output) ) ) return msg def _build_cancel_info_message(wf_ctrl, wf_ex): # Try to find where cancel is exactly. cancelled_tasks = sorted( lookup_utils.find_cancelled_task_executions(wf_ex.id), key=lambda t: t.name ) return ( 'Cancelled tasks: %s' % ', '.join([t.name for t in cancelled_tasks]) ) mistral-6.0.0/mistral/engine/utils.py0000666000175100017510000000573513245513261017664 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # Copyright 2015 - Huawei Technologies Co. Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy from mistral.db.v2 import api as db_api from mistral import exceptions as exc from mistral import utils def _compare_parameters(expected_input, actual_input): """Compares the expected parameters with the actual parameters. :param expected_input: Expected dict of parameters. :param actual_input: Actual dict of parameters. :return: Tuple {missing parameter names, unexpected parameter names} """ missing_params = [] unexpected_params = copy.deepcopy(list((actual_input or {}).keys())) for p_name, p_value in expected_input.items(): if p_value is utils.NotDefined and p_name not in unexpected_params: missing_params.append(str(p_name)) if p_name in unexpected_params: unexpected_params.remove(p_name) return missing_params, unexpected_params def validate_input(expected_input, actual_input, obj_name, obj_class): actual_input = actual_input or {} missing, unexpected = _compare_parameters( expected_input, actual_input ) if missing or unexpected: msg = 'Invalid input [name=%s, class=%s' msg_props = [obj_name, obj_class] if missing: msg += ', missing=%s' msg_props.append(missing) if unexpected: msg += ', unexpected=%s' msg_props.append(unexpected) msg += ']' raise exc.InputException(msg % tuple(msg_props)) def resolve_workflow_definition(parent_wf_name, parent_wf_spec_name, namespace, wf_spec_name): wf_def = None if parent_wf_name != parent_wf_spec_name: # If parent workflow belongs to a workbook then # check child workflow within the same workbook # (to be able to use short names within workbooks). 
# If it doesn't exist then use a name from spec # to find a workflow in DB. wb_name = parent_wf_name.rstrip(parent_wf_spec_name)[:-1] wf_full_name = "%s.%s" % (wb_name, wf_spec_name) wf_def = db_api.load_workflow_definition(wf_full_name, namespace) if not wf_def: wf_def = db_api.load_workflow_definition(wf_spec_name, namespace) if not wf_def: raise exc.WorkflowException( "Failed to find workflow [name=%s] [namespace=%s]" % (wf_spec_name, namespace) ) return wf_def mistral-6.0.0/mistral/engine/action_handler.py0000666000175100017510000000712413245513261021470 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
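# The input-validation helpers in engine/utils.py above can be modelled
# standalone. In the sketch below, NOT_DEFINED is a stand-in for
# mistral.utils.NotDefined -- the marker for a declared workflow input
# that has no default value and therefore must be supplied by the caller.

```python
NOT_DEFINED = object()  # stand-in for mistral.utils.NotDefined


def compare_parameters(expected, actual):
    """Simplified model of engine.utils._compare_parameters().

    Returns (missing, unexpected): input names the spec requires but the
    caller omitted, and names the caller passed that the spec doesn't
    declare.
    """
    missing = []
    unexpected = list((actual or {}).keys())

    for name, default in expected.items():
        # No default value means the parameter is mandatory.
        if default is NOT_DEFINED and name not in unexpected:
            missing.append(name)

        if name in unexpected:
            unexpected.remove(name)

    return missing, unexpected
```

# validate_input() raises InputException whenever either list is non-empty,
# e.g. compare_parameters({'a': NOT_DEFINED, 'b': 1}, {'b': 2, 'c': 3})
# yields (['a'], ['c']).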
from oslo_log import log as logging from osprofiler import profiler import traceback as tb from mistral.db.v2.sqlalchemy import models from mistral.engine import actions from mistral.engine import task_handler from mistral import exceptions as exc from mistral.lang import parser as spec_parser LOG = logging.getLogger(__name__) @profiler.trace('action-handler-on-action-complete', hide_args=True) def on_action_complete(action_ex, result): task_ex = action_ex.task_execution action = _build_action(action_ex) try: action.complete(result) except exc.MistralException as e: msg = ( "Failed to complete action [error=%s, action=%s, task=%s]:\n%s" % (e, action_ex.name, task_ex.name, tb.format_exc()) ) LOG.error(msg) action.fail(msg) if task_ex: task_handler.force_fail_task(task_ex, msg) return if task_ex: task_handler.schedule_on_action_complete(action_ex) @profiler.trace('action-handler-on-action-update', hide_args=True) def on_action_update(action_ex, state): task_ex = action_ex.task_execution action = _build_action(action_ex) try: action.update(state) except exc.MistralException as e: # If the update of the action execution fails, do not fail # the action execution. Log the exception and re-raise the # exception. 
msg = ( "Failed to update action [error=%s, action=%s, task=%s]:\n%s" % (e, action_ex.name, task_ex.name, tb.format_exc()) ) LOG.error(msg) raise if task_ex: task_handler.schedule_on_action_update(action_ex) @profiler.trace('action-handler-build-action', hide_args=True) def _build_action(action_ex): if isinstance(action_ex, models.WorkflowExecution): return actions.WorkflowAction(wf_name=action_ex.name, action_ex=action_ex) wf_name = None wf_spec_name = None if action_ex.workflow_name: wf_name = action_ex.workflow_name wf_spec = spec_parser.get_workflow_spec_by_execution_id( action_ex.task_execution.workflow_execution_id ) wf_spec_name = wf_spec.get_name() adhoc_action_name = action_ex.runtime_context.get('adhoc_action_name') if adhoc_action_name: action_def = actions.resolve_action_definition( adhoc_action_name, wf_name, wf_spec_name ) return actions.AdHocAction(action_def, action_ex=action_ex) action_def = actions.resolve_action_definition( action_ex.name, wf_name, wf_spec_name ) return actions.PythonAction(action_def, action_ex=action_ex) def build_action_by_name(action_name): action_def = actions.resolve_action_definition(action_name) action_cls = (actions.PythonAction if not action_def.spec else actions.AdHocAction) return action_cls(action_def) mistral-6.0.0/mistral/engine/default_engine.py0000666000175100017510000001525113245513272021471 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. from osprofiler import profiler from mistral.db import utils as db_utils from mistral.db.v2 import api as db_api from mistral.db.v2.sqlalchemy import models as db_models from mistral.engine import action_handler from mistral.engine import action_queue from mistral.engine import base from mistral.engine import workflow_handler as wf_handler from mistral import exceptions from mistral import utils as u from mistral.workflow import states # Submodules of mistral.engine will throw NoSuchOptError if configuration # options required at top level of this __init__.py are not imported before # the submodules are referenced. class DefaultEngine(base.Engine): @db_utils.retry_on_db_error @action_queue.process @profiler.trace('engine-start-workflow', hide_args=True) def start_workflow(self, wf_identifier, wf_namespace='', wf_ex_id=None, wf_input=None, description='', **params): if wf_namespace: params['namespace'] = wf_namespace try: with db_api.transaction(): wf_ex = wf_handler.start_workflow( wf_identifier, wf_namespace, wf_ex_id, wf_input or {}, description, params ) return wf_ex.get_clone() except exceptions.DBDuplicateEntryError: # NOTE(akovi): the workflow execution with a provided # wf_ex_id may already exist. In this case, simply # return the existing entity. 
with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex_id) return wf_ex.get_clone() @db_utils.retry_on_db_error @action_queue.process def start_action(self, action_name, action_input, description=None, **params): with db_api.transaction(): action = action_handler.build_action_by_name(action_name) action.validate_input(action_input) sync = params.get('run_sync') save = params.get('save_result') target = params.get('target') timeout = params.get('timeout') is_action_sync = action.is_sync(action_input) if sync and not is_action_sync: raise exceptions.InputException( "Action does not support synchronous execution.") if not sync and (save or not is_action_sync): action.schedule(action_input, target, timeout=timeout) return action.action_ex.get_clone() output = action.run(action_input, target, save=False, timeout=timeout) state = states.SUCCESS if output.is_success() else states.ERROR if not save: # Action execution is not created but we need to return similar # object to the client anyway. 
return db_models.ActionExecution( name=action_name, description=description, input=action_input, output=output.to_dict(), state=state ) action_ex_id = u.generate_unicode_uuid() values = { 'id': action_ex_id, 'name': action_name, 'description': description, 'input': action_input, 'output': output.to_dict(), 'state': state, } return db_api.create_action_execution(values) @db_utils.retry_on_db_error @action_queue.process @profiler.trace('engine-on-action-complete', hide_args=True) def on_action_complete(self, action_ex_id, result, wf_action=False, async_=False): with db_api.transaction(): if wf_action: action_ex = db_api.get_workflow_execution(action_ex_id) else: action_ex = db_api.get_action_execution(action_ex_id) action_handler.on_action_complete(action_ex, result) return action_ex.get_clone() @db_utils.retry_on_db_error @action_queue.process @profiler.trace('engine-on-action-update', hide_args=True) def on_action_update(self, action_ex_id, state, wf_action=False, async_=False): with db_api.transaction(): if wf_action: action_ex = db_api.get_workflow_execution(action_ex_id) else: action_ex = db_api.get_action_execution(action_ex_id) action_handler.on_action_update(action_ex, state) return action_ex.get_clone() @db_utils.retry_on_db_error @action_queue.process def pause_workflow(self, wf_ex_id): with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex_id) wf_handler.pause_workflow(wf_ex) return wf_ex.get_clone() @db_utils.retry_on_db_error @action_queue.process def rerun_workflow(self, task_ex_id, reset=True, env=None): with db_api.transaction(): task_ex = db_api.get_task_execution(task_ex_id) wf_ex = task_ex.workflow_execution wf_handler.rerun_workflow(wf_ex, task_ex, reset=reset, env=env) return wf_ex.get_clone() @db_utils.retry_on_db_error @action_queue.process def resume_workflow(self, wf_ex_id, env=None): with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex_id) wf_handler.resume_workflow(wf_ex, env=env) return wf_ex.get_clone() 
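# Every public DefaultEngine method above is stacked with
# @db_utils.retry_on_db_error and runs its whole body in one DB
# transaction: under READ_COMMITTED, a lost race surfaces as a retriable
# DB error and the method is simply re-executed. The sketch below is a
# simplified, parameterized variant of that idea (the real decorator is
# applied without arguments, and DBDeadlockError here is a stand-in for
# the actual oslo.db exceptions).

```python
import functools
import time


class DBDeadlockError(Exception):
    """Stand-in for a retriable DB error (deadlock, duplicate key)."""


def retry_on_db_error(retries=5, base_delay=0.0):
    """Simplified model of mistral.db.utils.retry_on_db_error."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(retries):
                try:
                    # Re-run the whole decorated engine method; the body is
                    # expected to open its own fresh transaction each time.
                    return func(*args, **kwargs)
                except DBDeadlockError:
                    if attempt == retries - 1:
                        raise

                    # Linear backoff between attempts.
                    time.sleep(base_delay * (attempt + 1))
        return wrapper
    return decorator
```

# A method that deadlocks twice and then succeeds completes transparently
# on the third attempt.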
@db_utils.retry_on_db_error @action_queue.process def stop_workflow(self, wf_ex_id, state, message=None): with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex_id) wf_handler.stop_workflow(wf_ex, state, message) return wf_ex.get_clone() def rollback_workflow(self, wf_ex_id): # TODO(rakhmerov): Implement. raise NotImplementedError mistral-6.0.0/mistral/engine/task_handler.py0000666000175100017510000003511013245513261021151 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # Copyright 2016 - Nokia Networks. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from oslo_log import log as logging from osprofiler import profiler import traceback as tb from mistral.db import utils as db_utils from mistral.db.v2 import api as db_api from mistral.db.v2.sqlalchemy import models from mistral.engine import action_queue from mistral.engine import tasks from mistral.engine import workflow_handler as wf_handler from mistral import exceptions as exc from mistral.lang import parser as spec_parser from mistral.services import scheduler from mistral.workflow import base as wf_base from mistral.workflow import commands as wf_cmds from mistral.workflow import states """Responsible for running tasks and handling results.""" LOG = logging.getLogger(__name__) _REFRESH_TASK_STATE_PATH = ( 'mistral.engine.task_handler._refresh_task_state' ) _SCHEDULED_ON_ACTION_COMPLETE_PATH = ( 'mistral.engine.task_handler._scheduled_on_action_complete' ) _SCHEDULED_ON_ACTION_UPDATE_PATH = ( 'mistral.engine.task_handler._scheduled_on_action_update' ) @profiler.trace('task-handler-run-task', hide_args=True) def run_task(wf_cmd): """Runs workflow task. :param wf_cmd: Workflow command. """ task = _build_task_from_command(wf_cmd) try: task.run() except exc.MistralException as e: wf_ex = wf_cmd.wf_ex task_spec = wf_cmd.task_spec msg = ( "Failed to run task [error=%s, wf=%s, task=%s]:\n%s" % (e, wf_ex.name, task_spec.get_name(), tb.format_exc()) ) LOG.error(msg) task.set_state(states.ERROR, msg) wf_handler.force_fail_workflow(wf_ex, msg) return if task.is_waiting() and (task.is_created() or task.is_state_changed()): _schedule_refresh_task_state(task.task_ex, 1) @profiler.trace('task-handler-on-action-complete', hide_args=True) def _on_action_complete(action_ex): """Handles action completion event. :param action_ex: Action execution. 
""" task_ex = action_ex.task_execution if not task_ex: return task_spec = spec_parser.get_task_spec(task_ex.spec) wf_ex = task_ex.workflow_execution task = _create_task( wf_ex, spec_parser.get_workflow_spec_by_execution_id(wf_ex.id), task_spec, task_ex.in_context, task_ex ) try: task.on_action_complete(action_ex) except exc.MistralException as e: wf_ex = task_ex.workflow_execution msg = ("Failed to handle action completion [error=%s, wf=%s, task=%s," " action=%s]:\n%s" % (e, wf_ex.name, task_ex.name, action_ex.name, tb.format_exc())) LOG.error(msg) task.set_state(states.ERROR, msg) wf_handler.force_fail_workflow(wf_ex, msg) @profiler.trace('task-handler-on-action-update', hide_args=True) def _on_action_update(action_ex): """Handles action update event. :param action_ex: Action execution. """ task_ex = action_ex.task_execution if not task_ex: return task_spec = spec_parser.get_task_spec(task_ex.spec) wf_ex = task_ex.workflow_execution task = _create_task( wf_ex, spec_parser.get_workflow_spec_by_execution_id(wf_ex.id), task_spec, task_ex.in_context, task_ex ) try: task.on_action_update(action_ex) if states.is_paused(action_ex.state): wf_handler.pause_workflow(wf_ex) if states.is_running(action_ex.state): # If any subworkflow of the parent workflow is paused, # then keep the parent workflow execution paused. for task_ex in wf_ex.task_executions: if states.is_paused(task_ex.state): return # Otherwise if no other subworkflow is paused, # then resume the parent workflow execution. wf_handler.resume_workflow(wf_ex) except exc.MistralException as e: wf_ex = task_ex.workflow_execution msg = ("Failed to handle action update [error=%s, wf=%s, task=%s," " action=%s]:\n%s" % (e, wf_ex.name, task_ex.name, action_ex.name, tb.format_exc())) LOG.error(msg) task.set_state(states.ERROR, msg) wf_handler.force_fail_workflow(wf_ex, msg) return def force_fail_task(task_ex, msg): """Forces the given task to fail. 
This method implements the 'forced' task fail without giving a chance to a workflow controller to handle the error. Its main purpose is to reflect errors caused by workflow structure (errors 'publish', 'on-xxx' clauses etc.) rather than failed actions. If such an error happens we should also force the entire workflow to fail. I.e., this kind of error must be propagated to a higher level, to the workflow. :param task_ex: Task execution. :param msg: Error message. """ wf_spec = spec_parser.get_workflow_spec_by_execution_id( task_ex.workflow_execution_id ) task = _build_task_from_execution(wf_spec, task_ex) task.set_state(states.ERROR, msg) wf_handler.force_fail_workflow(task_ex.workflow_execution, msg) def continue_task(task_ex): wf_spec = spec_parser.get_workflow_spec_by_execution_id( task_ex.workflow_execution_id ) task = _build_task_from_execution(wf_spec, task_ex) try: task.set_state(states.RUNNING, None) task.run() except exc.MistralException as e: wf_ex = task_ex.workflow_execution msg = ( "Failed to run task [error=%s, wf=%s, task=%s]:\n%s" % (e, wf_ex.name, task_ex.name, tb.format_exc()) ) LOG.error(msg) task.set_state(states.ERROR, msg) wf_handler.force_fail_workflow(wf_ex, msg) return def complete_task(task_ex, state, state_info): wf_spec = spec_parser.get_workflow_spec_by_execution_id( task_ex.workflow_execution_id ) task = _build_task_from_execution(wf_spec, task_ex) try: task.complete(state, state_info) except exc.MistralException as e: wf_ex = task_ex.workflow_execution msg = ( "Failed to complete task [error=%s, wf=%s, task=%s]:\n%s" % (e, wf_ex.name, task_ex.name, tb.format_exc()) ) LOG.error(msg) task.set_state(states.ERROR, msg) wf_handler.force_fail_workflow(wf_ex, msg) return def _build_task_from_execution(wf_spec, task_ex): return _create_task( task_ex.workflow_execution, wf_spec, wf_spec.get_task(task_ex.name), task_ex.in_context, task_ex ) @profiler.trace('task-handler-build-task-from-command', hide_args=True) def _build_task_from_command(cmd): 
if isinstance(cmd, wf_cmds.RunExistingTask): task = _create_task( cmd.wf_ex, cmd.wf_spec, spec_parser.get_task_spec(cmd.task_ex.spec), cmd.ctx, task_ex=cmd.task_ex, unique_key=cmd.task_ex.unique_key, waiting=cmd.task_ex.state == states.WAITING, triggered_by=cmd.triggered_by ) if cmd.reset: task.reset() return task if isinstance(cmd, wf_cmds.RunTask): task = _create_task( cmd.wf_ex, cmd.wf_spec, cmd.task_spec, cmd.ctx, unique_key=cmd.unique_key, waiting=cmd.is_waiting(), triggered_by=cmd.triggered_by ) return task raise exc.MistralError('Unsupported workflow command: %s' % cmd) def _create_task(wf_ex, wf_spec, task_spec, ctx, task_ex=None, unique_key=None, waiting=False, triggered_by=None): if task_spec.get_with_items(): cls = tasks.WithItemsTask else: cls = tasks.RegularTask return cls( wf_ex, wf_spec, task_spec, ctx, task_ex=task_ex, unique_key=unique_key, waiting=waiting, triggered_by=triggered_by ) @db_utils.retry_on_db_error @action_queue.process @profiler.trace('task-handler-refresh-task-state', hide_args=True) def _refresh_task_state(task_ex_id): with db_api.transaction(): task_ex = db_api.load_task_execution(task_ex_id) if not task_ex: return wf_ex = task_ex.workflow_execution if states.is_completed(wf_ex.state): return wf_spec = spec_parser.get_workflow_spec_by_execution_id( task_ex.workflow_execution_id ) wf_ctrl = wf_base.get_controller(wf_ex, wf_spec) log_state = wf_ctrl.get_logical_task_state( task_ex ) state = log_state.state state_info = log_state.state_info # Update 'triggered_by' because it could have changed. task_ex.runtime_context['triggered_by'] = log_state.triggered_by if state == states.RUNNING: continue_task(task_ex) elif state == states.ERROR: complete_task(task_ex, state, state_info) elif state == states.WAITING: # Let's assume that a task takes 0.01 sec in average to complete # and based on this assumption calculate a time of the next check. 
# The estimation is very rough, of course, but this delay will be # decreasing as task preconditions will be completing which will # give a decent asymptotic approximation. # For example, if a 'join' task has 100 inbound incomplete tasks # then the next 'refresh_task_state' call will happen in 1 # second. For 500 tasks it will be 5 seconds. The larger the # workflow is, the more beneficial this mechanism will be. delay = int(log_state.cardinality * 0.01) _schedule_refresh_task_state(task_ex, max(1, delay)) else: # Must never get here. raise RuntimeError( 'Unexpected logical task state [task_ex_id=%s, task_name=%s, ' 'state=%s]' % (task_ex_id, task_ex.name, state) ) def _schedule_refresh_task_state(task_ex, delay=0): """Schedules task preconditions check. This method provides transactional decoupling of task preconditions check from events that can potentially satisfy those preconditions. It's needed in non-locking model in order to avoid 'phantom read' phenomena when reading state of multiple tasks to see if a task that depends on them can start. Just starting a separate transaction without using scheduler is not safe due to concurrency window that we'll have in this case (time between transactions) whereas scheduler is a special component that is designed to be resistant to failures. :param task_ex: Task execution. :param delay: Delay. """ key = 'th_c_t_s_a-%s' % task_ex.id scheduler.schedule_call( None, _REFRESH_TASK_STATE_PATH, delay, key=key, task_ex_id=task_ex.id ) @db_utils.retry_on_db_error @action_queue.process def _scheduled_on_action_complete(action_ex_id, wf_action): with db_api.transaction(): if wf_action: action_ex = db_api.get_workflow_execution(action_ex_id) else: action_ex = db_api.get_action_execution(action_ex_id) _on_action_complete(action_ex) def schedule_on_action_complete(action_ex, delay=0): """Schedules task completion check. This method provides transactional decoupling of action completion from task completion check.
It's needed in non-locking model in order to avoid 'phantom read' phenomena when reading state of multiple actions to see if a task is completed. Just starting a separate transaction without using scheduler is not safe due to concurrency window that we'll have in this case (time between transactions) whereas scheduler is a special component that is designed to be resistant to failures. :param action_ex: Action execution. :param delay: Minimum amount of time before task completion check should be made. """ # Optimization to avoid opening a new transaction if it's not needed. if not action_ex.task_execution.spec.get('with-items'): _on_action_complete(action_ex) return key = 'th_on_a_c-%s' % action_ex.task_execution_id scheduler.schedule_call( None, _SCHEDULED_ON_ACTION_COMPLETE_PATH, delay, key=key, action_ex_id=action_ex.id, wf_action=isinstance(action_ex, models.WorkflowExecution) ) @db_utils.retry_on_db_error @action_queue.process def _scheduled_on_action_update(action_ex_id, wf_action): with db_api.transaction(): if wf_action: action_ex = db_api.get_workflow_execution(action_ex_id) else: action_ex = db_api.get_action_execution(action_ex_id) _on_action_update(action_ex) def schedule_on_action_update(action_ex, delay=0): """Schedules task update check. This method provides transactional decoupling of action update from task update check. It's needed in non-locking model in order to avoid 'phantom read' phenomena when reading state of multiple actions to see if a task is updated. Just starting a separate transaction without using scheduler is not safe due to concurrency window that we'll have in this case (time between transactions) whereas scheduler is a special component that is designed to be resistant to failures. :param action_ex: Action execution. :param delay: Minimum amount of time before task update check should be made. """ # Optimization to avoid opening a new transaction if it's not needed. 
if not action_ex.task_execution.spec.get('with-items'): _on_action_update(action_ex) return key = 'th_on_a_c-%s' % action_ex.task_execution_id scheduler.schedule_call( None, _SCHEDULED_ON_ACTION_UPDATE_PATH, delay, key=key, action_ex_id=action_ex.id, wf_action=isinstance(action_ex, models.WorkflowExecution) ) mistral-6.0.0/mistral/version.py0000666000175100017510000000132413245513262016733 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from pbr import version version_info = version.VersionInfo('mistral') version_string = version_info.version_string mistral-6.0.0/mistral/ext/0000775000175100017510000000000013245513604015472 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/ext/__init__.py0000666000175100017510000000000013245513261017572 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/ext/pygmentplugin.py0000666000175100017510000000455013245513261020753 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
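The precondition-recheck delay heuristic described in the `_refresh_task_state` comments above (delay = cardinality * 0.01 s, floored at one second by `max(1, delay)`) can be sketched standalone. The function name below is illustrative, not part of Mistral:

```python
def compute_refresh_delay(cardinality):
    """Estimate seconds until the next 'refresh_task_state' check.

    Mirrors `delay = int(log_state.cardinality * 0.01)` combined with
    the `max(1, delay)` floor applied at the call site, so a 'join'
    with 100 incomplete inbound tasks is rechecked after 1 second.
    """
    return max(1, int(cardinality * 0.01))
```

Because the number of incomplete preconditions only shrinks over time, the recheck interval shrinks with it, which is the asymptotic approximation the comment refers to.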
import re from pygments import lexer from pygments import token class MistralLexer(lexer.RegexLexer): name = 'Mistral' aliases = ['mistral'] flags = re.MULTILINE | re.UNICODE tokens = { "root": [ (r'^(\s)*(workflows|tasks|input|output|type)(\s)*:', token.Keyword), (r'^(\s)*(version|name|description)(\s)*:', token.Keyword), (r'^(\s)*(publish|timeout|retry|with\-items)(\s)*:', token.Keyword), (r'^(\s)*(on\-success|on\-error|on\-complete)(\s)*:', token.Keyword), (r'^(\s)*(action|workflow)(\s)*:', token.Keyword, 'call'), (r'(\-|\:)(\s)*(fail|succeed|pause)(\s)+', token.Operator.Word), (r'<%', token.Name.Entity, 'expression'), (r'\{\{', token.Name.Entity, 'expression'), (r'#.*$', token.Comment), (r'(^|\s|\-)+\d+', token.Number), lexer.include("generic"), ], "expression": [ (r'\$', token.Operator), (r'\s(json_pp|task|tasks|execution|env|uuid)(?!\w)', token.Name.Builtin), lexer.include("generic"), (r'%>', token.Name.Entity, '#pop'), (r'\}\}', token.Name.Entity, '#pop'), ], "call": [ (r'(\s)*[\w\.]+($|\s)', token.Name.Function), lexer.default('#pop'), ], "generic": [ (r'%>', token.Name.Entity, '#pop'), (r'\}\}', token.Name.Entity, '#pop'), (r'(\-|:|=|!|\[|\]|<|>|\/|\*)', token.Operator), (r'(null|None|True|False)', token.Name.Builtin), (r'"(\\\\|\\"|[^"])*"', token.String.Double), (r"'(\\\\|\\'|[^'])*'", token.String.Single), (r'\W|\w|\s|\(|\)|,|\.', token.Text), ] } mistral-6.0.0/mistral/utils/0000775000175100017510000000000013245513604016032 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/utils/rpc_utils.py0000666000175100017510000000141513245513262020413 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg CONF = cfg.CONF def get_rpc_backend(transport_url): if transport_url: return transport_url.transport return CONF.rpc_backend mistral-6.0.0/mistral/utils/rest_utils.py0000666000175100017510000002052013245513262020602 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import functools import json from oslo_db import exception as db_exc from oslo_log import log as logging import pecan import six import sqlalchemy as sa import tenacity import webob from wsme import exc as wsme_exc from mistral import context as auth_ctx from mistral.db.v2.sqlalchemy import api as db_api from mistral import exceptions as exc LOG = logging.getLogger(__name__) def wrap_wsme_controller_exception(func): """Decorator for controllers method. This decorator wraps controllers method to manage wsme exceptions: In case of expected error it aborts the request with specific status code. 
""" @functools.wraps(func) def wrapped(*args, **kwargs): try: return func(*args, **kwargs) except (exc.MistralException, exc.MistralError) as e: pecan.response.translatable_error = e LOG.error('Error during API call: %s', str(e)) raise wsme_exc.ClientSideError( msg=six.text_type(e), status_code=e.http_code ) return wrapped def wrap_pecan_controller_exception(func): """Decorator for controllers method. This decorator wraps controllers method to manage pecan exceptions: In case of expected error it aborts the request with specific status code. """ @functools.wraps(func) def wrapped(*args, **kwargs): try: return func(*args, **kwargs) except (exc.MistralException, exc.MistralError) as e: LOG.error('Error during API call: %s', str(e)) return webob.Response( status=e.http_code, content_type='application/json', body=json.dumps(dict(faultstring=six.text_type(e))), charset='UTF-8' ) return wrapped def validate_query_params(limit, sort_keys, sort_dirs): if limit is not None and limit <= 0: raise wsme_exc.ClientSideError("Limit must be positive.") if len(sort_keys) < len(sort_dirs): raise wsme_exc.ClientSideError( "Length of sort_keys must be equal or greater than sort_dirs." ) if len(sort_keys) > len(sort_dirs): sort_dirs.extend(['asc'] * (len(sort_keys) - len(sort_dirs))) for sort_dir in sort_dirs: if sort_dir not in ['asc', 'desc']: raise wsme_exc.ClientSideError( "Unknown sort direction, must be 'desc' or 'asc'." ) def validate_fields(fields, object_fields): """Check for requested non-existent fields. Check if the user requested non-existent fields. :param fields: A list of fields requested by the user. :param object_fields: A list of fields supported by the object. """ if not fields: return invalid_fields = set(fields) - set(object_fields) if invalid_fields: raise wsme_exc.ClientSideError( 'Field(s) %s are invalid.' 
% ', '.join(invalid_fields) ) def filters_to_dict(**kwargs): """Return only non-null values :param kwargs: All possible filters :type kwargs: dict :return: Actual filters :rtype: dict """ return {k: v for k, v in kwargs.items() if v is not None} def get_all(list_cls, cls, get_all_function, get_function, resource_function=None, marker=None, limit=None, sort_keys=None, sort_dirs=None, fields=None, all_projects=False, **filters): """Return a list of cls. :param list_cls: REST Resource collection class (e.g.: Actions, Workflows, ...) :param cls: REST Resource class (e.g.: Action, Workflow, ...) :param get_all_function: Request function to get all elements with filtering (limit, marker, sort_keys, sort_dirs, fields) :param get_function: Function used to fetch the marker :param resource_function: Optional, function used to fetch additional data :param marker: Optional. Pagination marker for large data sets. :param limit: Optional. Maximum number of resources to return in a single result. Default value is None for backward compatibility. :param sort_keys: Optional. List of columns to sort results by. Default: ['created_at']. :param sort_dirs: Optional. List of directions to sort corresponding to sort_keys, "asc" or "desc" can be chosen. Default: ['asc']. :param fields: Optional. A specified list of fields of the resource to be returned. 'id' will be included automatically in fields if it's provided, since it will be used when constructing 'next' link. :param filters: Optional. A specified dictionary of filters to match. :param all_projects: Optional. Get resources of all projects. """ sort_keys = ['created_at'] if sort_keys is None else sort_keys sort_dirs = ['asc'] if sort_dirs is None else sort_dirs fields = [] if fields is None else fields if fields and 'id' not in fields: fields.insert(0, 'id') validate_query_params(limit, sort_keys, sort_dirs) validate_fields(fields, cls.get_fields()) # Admin user can get all tenants resources, no matter they are private or # public. 
insecure = False if (all_projects or (auth_ctx.ctx().is_admin and filters.get('project_id', ''))): insecure = True marker_obj = None if marker: marker_obj = get_function(marker) def _get_all_function(): with db_api.transaction(): db_models = get_all_function( limit=limit, marker=marker_obj, sort_keys=sort_keys, sort_dirs=sort_dirs, insecure=insecure, **filters ) for db_model in db_models: if resource_function: rest_resource = resource_function(db_model) else: rest_resource = cls.from_db_model(db_model) rest_resources.append(rest_resource) rest_resources = [] r = create_db_retry_object() # If only certain fields are requested then we ignore "resource_function" # parameter because it doesn't make sense anymore. if fields: # Use retries to prevent possible failures. db_list = r.call( get_all_function, limit=limit, marker=marker_obj, sort_keys=sort_keys, sort_dirs=sort_dirs, fields=fields, insecure=insecure, **filters ) for obj_values in db_list: # Note: in case if only certain fields have been requested # "db_list" contains tuples with values of db objects. 
rest_resources.append( cls.from_tuples(zip(fields, obj_values)) ) else: r.call(_get_all_function) return list_cls.convert_with_links( rest_resources, limit, pecan.request.host_url, sort_keys=','.join(sort_keys), sort_dirs=','.join(sort_dirs), fields=','.join(fields) if fields else '', **filters ) class MistralRetrying(tenacity.Retrying): def call(self, fn, *args, **kwargs): try: return super(MistralRetrying, self).call(fn, *args, **kwargs) except tenacity.RetryError: raise exc.MistralError("The service is temporarily unavailable") def create_db_retry_object(): return MistralRetrying( retry=tenacity.retry_if_exception_type( (sa.exc.OperationalError, db_exc.DBConnectionError) ), stop=tenacity.stop_after_attempt(10), wait=tenacity.wait_incrementing(increment=0.2) # 0.2 seconds ) mistral-6.0.0/mistral/utils/openstack/0000775000175100017510000000000013245513604020021 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/utils/openstack/keystone.py0000666000175100017510000002157613245513262022251 0ustar zuulzuul00000000000000# Copyright (c) 2013 Mirantis Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
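The sort-parameter handling in `validate_query_params` above (sort_dirs may be shorter than sort_keys, never longer, and gets padded with 'asc' before validation) can be sketched standalone. The function name below is illustrative, not part of the module:

```python
def normalize_sort_params(sort_keys, sort_dirs):
    """Validate sort directions and pad sort_dirs with 'asc' so that
    each sort key has a direction, mirroring validate_query_params."""
    if len(sort_keys) < len(sort_dirs):
        raise ValueError(
            "Length of sort_keys must be equal to or greater than "
            "length of sort_dirs."
        )

    # Pad missing directions with the 'asc' default.
    dirs = sort_dirs + ['asc'] * (len(sort_keys) - len(sort_dirs))

    for d in dirs:
        if d not in ('asc', 'desc'):
            raise ValueError(
                "Unknown sort direction, must be 'desc' or 'asc'."
            )

    return sort_keys, dirs
```

For example, `normalize_sort_params(['created_at', 'name'], ['desc'])` yields directions `['desc', 'asc']`, which is what `get_all` relies on when it joins the lists back into query strings.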
import keystoneauth1.identity.generic as auth_plugins from keystoneauth1 import loading from keystoneauth1 import session as ks_session from keystoneauth1.token_endpoint import Token from keystoneclient import service_catalog as ks_service_catalog from keystoneclient.v3 import client as ks_client from keystoneclient.v3 import endpoints as ks_endpoints from oslo_config import cfg from oslo_utils import timeutils import six from mistral import context from mistral import exceptions CONF = cfg.CONF CONF.register_opt(cfg.IntOpt('timeout'), group='keystone_authtoken') def client(): ctx = context.ctx() auth_url = ctx.auth_uri or CONF.keystone_authtoken.auth_uri cl = ks_client.Client( user_id=ctx.user_id, token=ctx.auth_token, tenant_id=ctx.project_id, auth_url=auth_url ) cl.management_url = auth_url return cl def _determine_verify(ctx): if ctx.insecure: return False elif ctx.auth_cacert: return ctx.auth_cacert else: return True def get_session_and_auth(context, **kwargs): """Get session and auth parameters :param context: action context :return: dict to be used as kwargs for client service initialization """ if not context: raise AssertionError('context is mandatory') project_endpoint = get_endpoint_for_project(**kwargs) endpoint = format_url( project_endpoint.url, { 'tenant_id': context.project_id, 'project_id': context.project_id } ) auth = Token(endpoint=endpoint, token=context.auth_token) auth_uri = context.auth_uri or CONF.keystone_authtoken.auth_uri ks_auth = Token( endpoint=auth_uri, token=context.auth_token ) session = ks_session.Session( auth=ks_auth, verify=_determine_verify(context) ) return { "session": session, "auth": auth } def _admin_client(trust_id=None): if CONF.keystone_authtoken.auth_type is None: auth_url = CONF.keystone_authtoken.auth_uri project_name = CONF.keystone_authtoken.admin_tenant_name # You can't use trust and project together if trust_id: project_name = None cl = ks_client.Client( username=CONF.keystone_authtoken.admin_user,
password=CONF.keystone_authtoken.admin_password, project_name=project_name, auth_url=auth_url, trust_id=trust_id ) cl.management_url = auth_url return cl else: kwargs = {} if trust_id: # Remove project_name and project_id, since we need a trust scoped # auth object kwargs['project_name'] = None kwargs['project_domain_name'] = None kwargs['project_id'] = None kwargs['trust_id'] = trust_id auth = loading.load_auth_from_conf_options( CONF, 'keystone_authtoken', **kwargs ) sess = loading.load_session_from_conf_options( CONF, 'keystone_authtoken', auth=auth ) return ks_client.Client(session=sess) def client_for_admin(): return _admin_client() def client_for_trusts(trust_id): return _admin_client(trust_id=trust_id) def get_endpoint_for_project(service_name=None, service_type=None, region_name=None): if service_name is None and service_type is None: raise exceptions.MistralException( "Either 'service_name' or 'service_type' must be provided." ) ctx = context.ctx() service_catalog = obtain_service_catalog(ctx) # When region_name is not passed, first get from context as region_name # could be passed to rest api in http header ('X-Region-Name'). Otherwise, # just get region from mistral configuration. region = (region_name or ctx.region_name) if service_name == 'keystone': # Determining keystone endpoint should be done using # keystone_authtoken section as this option is special for keystone. region = region or CONF.keystone_authtoken.region_name else: region = region or CONF.openstack_actions.default_region service_endpoints = service_catalog.get_endpoints( service_name=service_name, service_type=service_type, region_name=region ) endpoint = None os_actions_endpoint_type = CONF.openstack_actions.os_actions_endpoint_type for endpoints in six.itervalues(service_endpoints): for ep in endpoints: # is V3 interface? 
if 'interface' in ep: interface_type = ep['interface'] if os_actions_endpoint_type in interface_type: endpoint = ks_endpoints.Endpoint( None, ep, loaded=True ) break # is V2 interface? if 'publicURL' in ep: endpoint_data = { 'url': ep['publicURL'], 'region': ep['region'] } endpoint = ks_endpoints.Endpoint( None, endpoint_data, loaded=True ) break if not endpoint: raise exceptions.MistralException( "No endpoints found [service_name=%s, service_type=%s," " region_name=%s]" % (service_name, service_type, region) ) else: return endpoint def obtain_service_catalog(ctx): token = ctx.auth_token if ctx.is_trust_scoped and is_token_trust_scoped(token): if ctx.trust_id is None: raise Exception( "'trust_id' must be provided in the admin context." ) trust_client = client_for_trusts(ctx.trust_id) token_data = trust_client.tokens.get_token_data( token, include_catalog=True ) response = token_data['token'] else: response = ctx.service_catalog # Target service catalog may not be passed via API. # If we don't have the catalog yet, it should be requested. 
if not response: response = client().tokens.get_token_data( token, include_catalog=True )['token'] if not response: raise exceptions.UnauthorizedException() service_catalog = ks_service_catalog.ServiceCatalog.factory(response) return service_catalog def get_keystone_endpoint_v2(): return get_endpoint_for_project('keystone', service_type='identity') def get_keystone_url_v2(): return get_endpoint_for_project('keystone', service_type='identity').url def format_url(url_template, values): # Since we can't use keystone module, we can do similar thing: # see https://github.com/openstack/keystone/blob/master/keystone/ # catalog/core.py#L42-L60 return url_template.replace('$(', '%(') % values def is_token_trust_scoped(auth_token): return 'OS-TRUST:trust' in client_for_admin().tokens.validate(auth_token) def get_admin_session(): """Returns a keystone session from Mistral's service credentials.""" if CONF.keystone_authtoken.auth_type is None: auth = auth_plugins.Password( CONF.keystone_authtoken.auth_uri, username=CONF.keystone_authtoken.admin_user, password=CONF.keystone_authtoken.admin_password, project_name=CONF.keystone_authtoken.admin_tenant_name, # NOTE(jaosorior): Once mistral supports keystone v3 properly, we # can fetch the following values from the configuration. 
user_domain_name='Default', project_domain_name='Default') return ks_session.Session(auth=auth) else: auth = loading.load_auth_from_conf_options( CONF, 'keystone_authtoken' ) return loading.load_session_from_conf_options( CONF, 'keystone_authtoken', auth=auth ) def will_expire_soon(expires_at): if not expires_at: return False stale_duration = CONF.expiration_token_duration assert stale_duration, "expiration_token_duration must be specified" expires = timeutils.parse_isotime(expires_at) return timeutils.is_soon(expires, stale_duration) mistral-6.0.0/mistral/utils/openstack/__init__.py0000666000175100017510000000000013245513262022122 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/utils/ssh_utils.py0000666000175100017510000001065413245513262020431 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from os import path import six from oslo_log import log as logging import paramiko from mistral import exceptions as exc KEY_PATH = path.expanduser("~/.ssh/") LOG = logging.getLogger(__name__) def _read_paramimko_stream(recv_func): result = '' buf = recv_func(1024) while buf != '': result += buf buf = recv_func(1024) return result def _to_paramiko_private_key(private_key_filename, password=None): if '../' in private_key_filename or '..\\' in private_key_filename: raise exc.DataAccessException( "Private key filename must not contain '..'. 
" "Actual: %s" % private_key_filename ) if private_key_filename.startswith('/'): private_key_path = private_key_filename else: private_key_path = KEY_PATH + private_key_filename return paramiko.RSAKey( filename=private_key_path, password=password ) def _connect(host, username, password=None, pkey=None, proxy=None): if isinstance(pkey, six.string_types): pkey = _to_paramiko_private_key(pkey, password) LOG.debug('Creating SSH connection to %s', host) ssh_client = paramiko.SSHClient() ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) ssh_client.connect( host, username=username, password=password, pkey=pkey, sock=proxy ) return ssh_client def _cleanup(ssh_client): ssh_client.close() def _execute_command(ssh_client, cmd, get_stderr=False, raise_when_error=True): try: chan = ssh_client.get_transport().open_session() chan.exec_command(cmd) # TODO(nmakhotkin): that could hang if stderr buffer overflows stdout = _read_paramimko_stream(chan.recv) stderr = _read_paramimko_stream(chan.recv_stderr) ret_code = chan.recv_exit_status() if ret_code and raise_when_error: raise RuntimeError("Cmd: %s\nReturn code: %s\nstdout: %s" % (cmd, ret_code, stdout)) if get_stderr: return ret_code, stdout, stderr else: return ret_code, stdout finally: _cleanup(ssh_client) def execute_command_via_gateway(cmd, host, username, private_key_filename, gateway_host, gateway_username=None, proxy_command=None, password=None): LOG.debug('Creating SSH connection') private_key = _to_paramiko_private_key(private_key_filename, password) proxy = None if proxy_command: LOG.debug('Creating proxy using command: %s', proxy_command) proxy = paramiko.ProxyCommand(proxy_command) _proxy_ssh_client = paramiko.SSHClient() _proxy_ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) LOG.debug('Connecting to proxy gateway at: %s', gateway_host) if not gateway_username: gateway_username = username _proxy_ssh_client.connect( gateway_host, username=gateway_username, pkey=private_key, sock=proxy ) 
proxy = _proxy_ssh_client.get_transport().open_session() proxy.exec_command("nc {0} 22".format(host)) ssh_client = _connect( host, username=username, pkey=private_key, proxy=proxy ) try: return _execute_command( ssh_client, cmd, get_stderr=False, raise_when_error=True ) finally: _cleanup(_proxy_ssh_client) def execute_command(cmd, host, username, password=None, private_key_filename=None, get_stderr=False, raise_when_error=True): ssh_client = _connect(host, username, password, private_key_filename) LOG.debug("Executing command %s", cmd) return _execute_command(ssh_client, cmd, get_stderr, raise_when_error) mistral-6.0.0/mistral/utils/profiler.py0000666000175100017510000000302713245513262020232 0ustar zuulzuul00000000000000# Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
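The chunked-read loop used by `_read_paramimko_stream` above can be exercised without a real paramiko channel by passing any callable with the same recv-style contract. The toy `make_recv` below is illustrative; the real recv functions come from a paramiko channel (`chan.recv`, `chan.recv_stderr`):

```python
def read_stream(recv_func, chunk_size=1024):
    """Drain a stream by calling recv_func until it returns an empty
    string, mirroring _read_paramimko_stream above."""
    result = ''
    buf = recv_func(chunk_size)
    while buf != '':
        result += buf
        buf = recv_func(chunk_size)
    return result


def make_recv(data):
    """Build a toy recv(n) over an in-memory string for testing."""
    state = {'pos': 0}

    def recv(n):
        chunk = data[state['pos']:state['pos'] + n]
        state['pos'] += n
        return chunk

    return recv
```

Note the sketch keeps the module's string-based comparison; a paramiko channel on Python 3 returns bytes, so a bytes variant would compare against `b''` instead.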
import copy import datetime import json from oslo_config import cfg from oslo_log import log as logging import osprofiler.notifier import osprofiler.profiler import osprofiler.web PROFILER_LOG = logging.getLogger(cfg.CONF.profiler.profiler_log_name) def log_to_file(info, context=None): attrs = [ str(info['timestamp']), info['base_id'], info['parent_id'], info['trace_id'], info['name'] ] if 'info' in info and 'db' in info['info']: db_info = copy.deepcopy(info['info']['db']) db_info['params'] = { k: str(v) if isinstance(v, datetime.datetime) else v for k, v in db_info.get('params', {}).items() } attrs.append(json.dumps(db_info)) PROFILER_LOG.info(' '.join(attrs)) def setup(binary, host): if cfg.CONF.profiler.enabled: osprofiler.notifier.set(log_to_file) osprofiler.web.enable(cfg.CONF.profiler.hmac_keys) mistral-6.0.0/mistral/utils/inspect_utils.py0000666000175100017510000000436413245513262021276 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
import inspect import json def get_public_fields(obj): """Returns only public fields from object or class.""" public_attributes = [attr for attr in dir(obj) if not attr.startswith("_")] public_fields = {} for attribute_str in public_attributes: attr = getattr(obj, attribute_str) is_field = not (inspect.isbuiltin(attr) or inspect.isfunction(attr) or inspect.ismethod(attr) or isinstance(attr, property)) if is_field: public_fields[attribute_str] = attr return public_fields def get_docstring(obj): return inspect.getdoc(obj) def get_arg_list(func): argspec = inspect.getargspec(func) args = argspec.args if 'self' in args: args.remove('self') return args def get_arg_list_as_str(func): args = getattr(func, "__arguments__", None) if args: return args argspec = inspect.getargspec(func) defs = list(argspec.defaults or []) args = get_arg_list(func) diff_args_defs = len(args) - len(defs) arg_str_list = [] for index, default in enumerate(args): if index >= diff_args_defs: try: arg_str_list.append( "%s=%s" % ( args[index], json.dumps(defs[index - diff_args_defs]) ) ) except TypeError: pass else: arg_str_list.append("%s" % args[index]) if argspec.keywords: arg_str_list.append("**%s" % argspec.keywords) return ", ".join(arg_str_list) mistral-6.0.0/mistral/utils/wf_trace.py0000666000175100017510000000303213245513262020176 0ustar zuulzuul00000000000000# Copyright 2015 - StackStorm, Inc. # Copyright 2015 - Mirantis, Inc # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
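The attribute filtering done by `get_public_fields` above (keep non-underscore attributes that are not builtins, functions, methods, or properties) can be sketched standalone. The names below are illustrative, not part of the module:

```python
import inspect


def public_fields(obj):
    """Collect non-callable, non-underscore attributes, mirroring
    get_public_fields above."""
    fields = {}
    for name in dir(obj):
        if name.startswith('_'):
            continue
        attr = getattr(obj, name)
        # Skip anything callable-like or computed, keep plain data.
        if not (inspect.isbuiltin(attr) or inspect.isfunction(attr) or
                inspect.ismethod(attr) or isinstance(attr, property)):
            fields[name] = attr
    return fields


class Sample(object):
    kind = 'demo'
    _hidden = 1

    def method(self):
        pass
```

Applied to `Sample`, only `kind` survives: `_hidden` is filtered by the underscore check and `method` by the function check.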
from oslo_config import cfg from oslo_log import log as logging from mistral.db.v2.sqlalchemy import models cfg.CONF.import_opt('workflow_trace_log_name', 'mistral.config') WF_TRACE = logging.getLogger(cfg.CONF.workflow_trace_log_name) def info(obj, msg, *args, **kvargs): """Logs workflow trace record for Execution or Task. :param obj: If type is TaskExecution or WorkflowExecution, appends execution_id and task_id to the log message. The rest of parameters follow logger.info(...) """ debug_info = '' if type(obj) is models.TaskExecution: exec_id = obj.workflow_execution_id task_id = obj.id debug_info = '(execution_id=%s task_id=%s)' % (exec_id, task_id) elif type(obj) is models.WorkflowExecution: debug_info = '(execution_id=%s)' % obj.id if debug_info: msg = '%s %s' % (msg, debug_info) WF_TRACE.info(msg, *args, **kvargs) mistral-6.0.0/mistral/utils/filter_utils.py0000666000175100017510000000617513245513262021124 0ustar zuulzuul00000000000000# Copyright 2016 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six def create_filters_from_request_params(**params): """Create filters from REST request parameters. :param req_params: REST request parameters. :return: filters dictionary. 
""" filters = {} for column, data in params.items(): if data is not None: if isinstance(data, six.string_types): f_type, value = _extract_filter_type_and_value(data) create_or_update_filter(column, value, f_type, filters) else: create_or_update_filter(column, data, _filter=filters) return filters def create_or_update_filter(column, value, filter_type='eq', _filter=None): """Create or Update filter. :param column: Column name by which user want to filter. :param value: Column value. :param filter_type: filter type. Filter type can be 'eq', 'neq', 'gt', 'gte', 'lte', 'in', 'lt', 'nin'. Default is 'eq'. :param _filter: Optional. If provided same filter dictionary will be updated. :return: filter dictionary. """ if _filter is None: _filter = {} _filter[column] = {filter_type: value} return _filter def _extract_filter_type_and_value(data): """Extract filter type and its value from the data. :param data: REST parameter value from which filter type and value can be get. It should be in format of 'filter_type:value'. :return: filter type and value. 
""" if data.startswith("in:"): value = list(six.text_type(data[3:]).split(",")) filter_type = 'in' elif data.startswith("nin:"): value = list(six.text_type(data[4:]).split(",")) filter_type = 'nin' elif data.startswith("neq:"): value = six.text_type(data[4:]) filter_type = 'neq' elif data.startswith("gt:"): value = six.text_type(data[3:]) filter_type = 'gt' elif data.startswith("gte:"): value = six.text_type(data[4:]) filter_type = 'gte' elif data.startswith("lt:"): value = six.text_type(data[3:]) filter_type = 'lt' elif data.startswith("lte:"): value = six.text_type(data[4:]) filter_type = 'lte' elif data.startswith("eq:"): value = six.text_type(data[3:]) filter_type = 'eq' elif data.startswith("has:"): value = six.text_type(data[4:]) filter_type = 'has' else: value = data filter_type = 'eq' return filter_type, value mistral-6.0.0/mistral/utils/__init__.py0000666000175100017510000003306413245513272020154 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # Copyright 2015 - Huawei Technologies Co. Ltd # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import contextlib import datetime import functools import json import os from os import path import shutil import socket import sys import tempfile import threading import eventlet from eventlet import corolocal from oslo_concurrency import processutils from oslo_log import log as logging from oslo_utils import timeutils from oslo_utils import uuidutils import pkg_resources as pkg import random from mistral import exceptions as exc from mistral import version # Thread local storage. _th_loc_storage = threading.local() ACTION_TASK_TYPE = 'ACTION' WORKFLOW_TASK_TYPE = 'WORKFLOW' def generate_unicode_uuid(): return uuidutils.generate_uuid() def is_valid_uuid(uuid_string): return uuidutils.is_uuid_like(uuid_string) def _get_greenlet_local_storage(): greenlet_id = corolocal.get_ident() greenlet_locals = getattr(_th_loc_storage, "greenlet_locals", None) if not greenlet_locals: greenlet_locals = {} _th_loc_storage.greenlet_locals = greenlet_locals if greenlet_id in greenlet_locals: return greenlet_locals[greenlet_id] else: return None def has_thread_local(var_name): gl_storage = _get_greenlet_local_storage() return gl_storage and var_name in gl_storage def get_thread_local(var_name): if not has_thread_local(var_name): return None return _get_greenlet_local_storage()[var_name] def set_thread_local(var_name, val): if val is None and has_thread_local(var_name): gl_storage = _get_greenlet_local_storage() # Delete variable from greenlet local storage. if gl_storage: del gl_storage[var_name] # Delete the entire greenlet local storage from thread local storage. if gl_storage and len(gl_storage) == 0: del _th_loc_storage.greenlet_locals[corolocal.get_ident()] if val is not None: gl_storage = _get_greenlet_local_storage() if not gl_storage: gl_storage = _th_loc_storage.greenlet_locals[ corolocal.get_ident()] = {} gl_storage[var_name] = val def log_exec(logger, level=logging.DEBUG): """Decorator for logging function execution. 
By default, target function execution is logged with DEBUG level. """ def _decorator(func): @functools.wraps(func) def _logged(*args, **kw): params_repr = ("[args=%s, kw=%s]" % (str(args), str(kw)) if args or kw else "") func_repr = ("Called method [name=%s, doc='%s', params=%s]" % (func.__name__, func.__doc__, params_repr)) logger.log(level, func_repr) return func(*args, **kw) _logged.__doc__ = func.__doc__ return _logged return _decorator def merge_dicts(left, right, overwrite=True): """Merges two dictionaries. Values of right dictionary recursively get merged into left dictionary. :param left: Left dictionary. :param right: Right dictionary. :param overwrite: If False, left value will not be overwritten if exists. """ if left is None: return right if right is None: return left for k, v in right.items(): if k not in left: left[k] = v else: left_v = left[k] if isinstance(left_v, dict) and isinstance(v, dict): merge_dicts(left_v, v, overwrite=overwrite) elif overwrite: left[k] = v return left def update_dict(left, right): """Updates left dict with content from right dict :param left: Left dict. :param right: Right dict. :return: the updated left dictionary. """ if left is None: return right if right is None: return left left.update(right) return left def get_file_list(directory): base_path = pkg.resource_filename( version.version_info.package, directory ) return [path.join(base_path, f) for f in os.listdir(base_path) if path.isfile(path.join(base_path, f))] def cut_dict(d, length=100): """Removes dictionary entries according to the given length. This method removes a number of entries, if needed, so that a string representation would fit into the given length. The intention of this method is to optimize truncation of string representation for dictionaries where the exact precision is not critically important. 
Otherwise, we'd always have to convert a dict into a string first and then shrink it to a needed size which will increase memory footprint and reduce performance in case of large dictionaries (i.e. tens of thousands entries). Note that the method, due to complexity of the algorithm, has some non-zero precision which depends on exact keys and values placed into the dict. So for some dicts their reduced string representations will be only approximately equal to the given value (up to around several chars difference). :param d: A dictionary. :param length: A length limiting the dictionary string representation. :return: A dictionary which is a subset of the given dictionary. """ if not isinstance(d, dict): raise ValueError("A dictionary is expected, got: %s" % type(d)) res = "{" idx = 0 for key, value in d.items(): k = str(key) v = str(value) # Processing key. new_len = len(res) + len(k) is_str = isinstance(key, str) if is_str: new_len += 2 if new_len >= length: res += "'%s..." % k[:length - new_len] if is_str else "%s..." % k break else: res += "'%s'" % k if is_str else k res += ": " # Processing value. new_len = len(res) + len(v) is_str = isinstance(value, str) if is_str: new_len += 2 if new_len >= length: res += "'%s..." % v[:length - new_len] if is_str else "%s..." % v break else: res += "'%s'" % v if is_str else v res += ', ' if idx < len(d) - 1 else '}' if len(res) >= length: res += '...' break idx += 1 return res def cut_list(l, length=100): if not isinstance(l, list): raise ValueError("A list is expected, got: %s" % type(l)) res = '[' for idx, item in enumerate(l): s = str(item) new_len = len(res) + len(s) is_str = isinstance(item, str) if is_str: new_len += 2 if new_len >= length: res += "'%s..." % s[:length - new_len] if is_str else "%s..." % s break else: res += "'%s'" % s if is_str else s res += ', ' if idx < len(l) - 1 else ']' return res def cut_string(s, length=100): if len(s) > length: return "%s..." 
% s[:length] return s def cut(data, length=100): if not data: return data if isinstance(data, list): return cut_list(data, length=length) if isinstance(data, dict): return cut_dict(data, length=length) return cut_string(str(data), length=length) def cut_by_kb(data, kilobytes): if kilobytes <= 0: return cut(data) length = get_number_of_chars_from_kilobytes(kilobytes) return cut(data, length) def cut_by_char(data, length): return cut(data, length) def iter_subclasses(cls, _seen=None): """Generator over all subclasses of a given class in depth first order.""" if not isinstance(cls, type): raise TypeError('iter_subclasses must be called with new-style class' ', not %.100r' % cls) _seen = _seen or set() try: subs = cls.__subclasses__() except TypeError: # fails only when cls is type subs = cls.__subclasses__(cls) for sub in subs: if sub not in _seen: _seen.add(sub) yield sub for _sub in iter_subclasses(sub, _seen): yield _sub def random_sleep(limit=1): """Sleeps for a random period of time not exceeding the given limit. Mostly intended to be used by tests to emulate race conditions. :param limit: Float number of seconds that a sleep period must not exceed. """ seconds = random.Random().randint(0, limit * 1000) * 0.001 print("Sleep: %s sec..." % seconds) eventlet.sleep(seconds) class NotDefined(object): """Marker of an empty value. In a number of cases None can't be used to express the semantics of a not defined value because None is just a normal value rather than a value set to denote that it's not defined. This class can be used in such cases instead of None. 
""" pass def get_number_of_chars_from_kilobytes(kilobytes): bytes_per_char = sys.getsizeof('s') - sys.getsizeof('') total_number_of_chars = int(kilobytes * 1024 / bytes_per_char) return total_number_of_chars def get_dict_from_string(string, delimiter=','): if not string: return {} kv_dicts = [] for kv_pair_str in string.split(delimiter): kv_str = kv_pair_str.strip() kv_list = kv_str.split('=') if len(kv_list) > 1: try: value = json.loads(kv_list[1]) except ValueError: value = kv_list[1] kv_dicts += [{kv_list[0]: value}] else: kv_dicts += [kv_list[0]] return get_dict_from_entries(kv_dicts) def get_dict_from_entries(entries): """Transforms a list of entries into dictionary. :param entries: A list of entries. If an entry is a dictionary the method simply updates the result dictionary with its content. If an entry is not a dict adds {entry, NotDefined} into the result. """ result = {} for e in entries: if isinstance(e, dict): result.update(e) else: # NOTE(kong): we put NotDefined here as the value of # param without value specified, to distinguish from # the valid values such as None, ''(empty string), etc. result[e] = NotDefined return result def get_process_identifier(): """Gets current running process identifier.""" return "%s_%s" % (socket.gethostname(), os.getpid()) @contextlib.contextmanager def tempdir(**kwargs): argdict = kwargs.copy() if 'dir' not in argdict: argdict['dir'] = '/tmp/' tmpdir = tempfile.mkdtemp(**argdict) try: yield tmpdir finally: try: shutil.rmtree(tmpdir) except OSError as e: raise exc.DataAccessException( "Failed to delete temp dir %(dir)s (reason: %(reason)s)" % {'dir': tmpdir, 'reason': e} ) def save_text_to(text, file_path, overwrite=False): if os.path.exists(file_path) and not overwrite: raise exc.DataAccessException( "Cannot save data to file. File %s already exists." ) with open(file_path, 'w') as f: f.write(text) def generate_key_pair(key_length=2048): """Create RSA key pair with specified number of bits in key. 
Returns tuple of private and public keys. """ with tempdir() as tmpdir: keyfile = os.path.join(tmpdir, 'tempkey') args = [ 'ssh-keygen', '-q', # quiet '-N', '', # w/o passphrase '-t', 'rsa', # create key of rsa type '-f', keyfile, # filename of the key file '-C', 'Generated-by-Mistral' # key comment ] if key_length is not None: args.extend(['-b', key_length]) processutils.execute(*args) if not os.path.exists(keyfile): raise exc.DataAccessException( "Private key file hasn't been created" ) private_key = open(keyfile).read() public_key_path = keyfile + '.pub' if not os.path.exists(public_key_path): raise exc.DataAccessException( "Public key file hasn't been created" ) public_key = open(public_key_path).read() return private_key, public_key def utc_now_sec(): """Returns current time and drops microseconds.""" return timeutils.utcnow().replace(microsecond=0) def datetime_to_str(val, sep=' '): """Converts datetime value to string. If the given value is not an instance of datetime then the method returns the same value. :param val: datetime value. :param sep: Separator between date and time. :return: Datetime as a string. """ if isinstance(val, datetime.datetime): return val.isoformat(sep) return val def datetime_to_str_in_dict(d, key, sep=' '): """Converts datetime value in te given dict to string. :param d: A dictionary. :param key: The key for which we need to convert the value. :param sep: Separator between date and time. """ val = d.get(key) if val is not None: d[key] = datetime_to_str(d[key], sep=sep) mistral-6.0.0/mistral/utils/javascript.py0000666000175100017510000000376213245513262020564 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
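The `get_dict_from_string` / `get_dict_from_entries` helpers in `mistral/utils/__init__.py` above parse CLI-style `key=value` strings, JSON-decoding values and marking bare keys with `NotDefined`. A condensed standalone sketch of that behavior (same semantics, simplified parsing):

```python
import json


class NotDefined(object):
    """Marker distinguishing 'no value given' from valid values like None or ''."""


def get_dict_from_string(string, delimiter=','):
    """Parse 'k1=v1,k2=v2,flag' into a dict, JSON-decoding each value."""
    result = {}
    if not string:
        return result
    for pair in string.split(delimiter):
        key, sep, raw = pair.strip().partition('=')
        if not sep:
            result[key] = NotDefined     # bare key, no '=value' part
            continue
        try:
            result[key] = json.loads(raw)
        except ValueError:
            result[key] = raw            # not valid JSON -> keep as string
    return result


params = get_dict_from_string('timeout=60, retry=true, name=demo, force')
print(params['timeout'], params['retry'], params['name'])   # 60 True demo
print(params['force'] is NotDefined)                        # True
```

Because values go through `json.loads`, `60` becomes an int and `true` a boolean, while anything that fails to decode (like `demo`) stays a plain string — exactly the fallback the original's `except ValueError` branch provides.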
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import abc import json from oslo_utils import importutils from mistral import config as cfg from mistral import exceptions as exc _PYV8 = importutils.try_import('PyV8') _V8EVAL = importutils.try_import('v8eval') class JSEvaluator(object): @classmethod @abc.abstractmethod def evaluate(cls, script, context): """Executes given JavaScript.""" pass class PyV8Evaluator(JSEvaluator): @classmethod def evaluate(cls, script, context): if not _PYV8: raise exc.MistralException( "PyV8 module is not available. Please install PyV8." ) with _PYV8.JSContext() as ctx: # Prepare data context and way for interaction with it. ctx.eval('$ = %s' % json.dumps(context)) result = ctx.eval(script) return _PYV8.convert(result) class V8EvalEvaluator(JSEvaluator): @classmethod def evaluate(cls, script, context): if not _V8EVAL: raise exc.MistralException( "v8eval module is not available. Please install v8eval." ) v8 = _V8EVAL.V8() return v8.eval(('$ = %s; %s' % (json.dumps(context), script)).encode( encoding='UTF-8')) EVALUATOR = (V8EvalEvaluator if cfg.CONF.js_implementation == 'v8eval' else PyV8Evaluator) def evaluate(script, context): return EVALUATOR.evaluate(script, context) mistral-6.0.0/mistral/utils/expression_utils.py0000666000175100017510000002415113245513262022030 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from functools import partial import warnings from oslo_log import log as logging from oslo_serialization import jsonutils from stevedore import extension import yaml import yaql from mistral.db.v2 import api as db_api from mistral import utils LOG = logging.getLogger(__name__) ROOT_YAQL_CONTEXT = None def get_yaql_context(data_context): global ROOT_YAQL_CONTEXT if not ROOT_YAQL_CONTEXT: ROOT_YAQL_CONTEXT = yaql.create_context() _register_yaql_functions(ROOT_YAQL_CONTEXT) new_ctx = ROOT_YAQL_CONTEXT.create_child_context() new_ctx['$'] = data_context if isinstance(data_context, dict): new_ctx['__env'] = data_context.get('__env') new_ctx['__execution'] = data_context.get('__execution') new_ctx['__task_execution'] = data_context.get('__task_execution') return new_ctx def get_jinja_context(data_context): new_ctx = { '_': data_context } _register_jinja_functions(new_ctx) if isinstance(data_context, dict): new_ctx['__env'] = data_context.get('__env') new_ctx['__execution'] = data_context.get('__execution') new_ctx['__task_execution'] = data_context.get('__task_execution') return new_ctx def get_custom_functions(): """Get custom functions Retrieves the list of custom evaluation functions """ functions = dict() mgr = extension.ExtensionManager( namespace='mistral.expression.functions', invoke_on_load=False ) for name in mgr.names(): functions[name] = mgr[name].plugin return functions def _register_yaql_functions(yaql_ctx): functions = get_custom_functions() for name in functions: yaql_ctx.register_function(functions[name], name=name) def 
_register_jinja_functions(jinja_ctx): functions = get_custom_functions() for name in functions: jinja_ctx[name] = partial(functions[name], jinja_ctx['_']) # Additional YAQL functions needed by Mistral. # If a function name ends with underscore then it doesn't need to pass # the name of the function when context registers it. def env_(context): return context['__env'] def executions_(context, id=None, root_execution_id=None, state=None, from_time=None, to_time=None ): filter = {} if id is not None: filter = utils.filter_utils.create_or_update_filter( 'id', id, "eq", filter ) if root_execution_id is not None: filter = utils.filter_utils.create_or_update_filter( 'root_execution_id', root_execution_id, "eq", filter ) if state is not None: filter = utils.filter_utils.create_or_update_filter( 'state', state, "eq", filter ) if from_time is not None: filter = utils.filter_utils.create_or_update_filter( 'created_at', from_time, "gte", filter ) if to_time is not None: filter = utils.filter_utils.create_or_update_filter( 'created_at', to_time, "lt", filter ) return db_api.get_workflow_executions(**filter) def execution_(context): wf_ex = db_api.get_workflow_execution(context['__execution']['id']) return { 'id': wf_ex.id, 'name': wf_ex.name, 'spec': wf_ex.spec, 'input': wf_ex.input, 'params': wf_ex.params, 'created_at': wf_ex.created_at.isoformat(' '), 'updated_at': wf_ex.updated_at.isoformat(' ') } def json_pp_(context, data=None): warnings.warn( "json_pp was deprecated in Queens and will be removed in the S cycle. " "The json_dump expression function can be used for outputting JSON", DeprecationWarning ) return jsonutils.dumps( data or context, indent=4 ).replace("\\n", "\n").replace(" \n", "\n") def json_dump_(context, data): return jsonutils.dumps(data, indent=4) def yaml_dump_(context, data): return yaml.safe_dump(data, default_flow_style=False) def task_(context, task_name=None): # This section may not exist in a context if it's calculated not in # task scope. 
cur_task = context['__task_execution'] # 1. If task_name is empty it's 'task()' use case, we need to get the # current task. # 2. if task_name is not empty but it's equal to the current task name # we need to take exactly the current instance of this task. Otherwise # there may be ambiguity if there are many tasks with this name. # 3. In other case we just find a task in DB by the given name. if cur_task and (not task_name or cur_task['name'] == task_name): task_ex = db_api.get_task_execution(cur_task['id']) else: task_execs = db_api.get_task_executions( workflow_execution_id=context['__execution']['id'], name=task_name ) # TODO(rakhmerov): Account for multiple executions (i.e. in case of # cycles). task_ex = task_execs[-1] if len(task_execs) > 0 else None if not task_ex: LOG.warning( "Task '%s' not found by the task() expression function", task_name ) return None # We don't use to_dict() db model method because not all fields # make sense for user. return _convert_to_user_model(task_ex) def _should_pass_filter(t, state, flat): # Start from assuming all is true, check only if needed. state_match = True flat_match = True if state: state_match = t['state'] == state if flat: is_action = t['type'] == utils.ACTION_TASK_TYPE if not is_action: nested_execs = db_api.get_workflow_executions( task_execution_id=t.id ) for n in nested_execs: flat_match = flat_match and n.state != t.state return state_match and flat_match def _get_tasks_from_db(workflow_execution_id=None, recursive=False, state=None, flat=False): task_execs = [] nested_task_exs = [] kwargs = {} if workflow_execution_id: kwargs['workflow_execution_id'] = workflow_execution_id # We can't add state to query if we want to filter by workflow_execution_id # recursively. 
There might be a workflow_execution in one state with a # nested workflow execution that has a task in the desired state until we # have an optimization for queering all workflow executions under a given # top level workflow execution, this is the way to go. if state and not (workflow_execution_id and recursive): kwargs['state'] = state task_execs.extend(db_api.get_task_executions(**kwargs)) # If it is not recursive no need to check nested workflows. # If there is no workflow execution id, we already have all we need, and # doing more queries will just create duplication in the results. if recursive and workflow_execution_id: for t in task_execs: if t.type == utils.WORKFLOW_TASK_TYPE: # Get nested workflow execution that matches the task. nested_workflow_executions = db_api.get_workflow_executions( task_execution_id=t.id ) # There might be zero nested executions. for nested_workflow_execution in nested_workflow_executions: nested_task_exs.extend( _get_tasks_from_db( nested_workflow_execution.id, recursive, state, flat ) ) if state or flat: # Filter by state and flat. task_execs = [ t for t in task_execs if _should_pass_filter(t, state, flat) ] # The nested tasks were already filtered, since this is a recursion. task_execs.extend(nested_task_exs) return task_execs def tasks_(context, workflow_execution_id=None, recursive=False, state=None, flat=False): task_execs = _get_tasks_from_db( workflow_execution_id, recursive, state, flat ) # Convert task_execs to user model and return. return [_convert_to_user_model(t) for t in task_execs] def _convert_to_user_model(task_ex): # Importing data_flow in order to break cycle dependency between modules. from mistral.workflow import data_flow # We don't use to_dict() db model method because not all fields # make sense for user. 
return { 'id': task_ex.id, 'name': task_ex.name, 'spec': task_ex.spec, 'state': task_ex.state, 'state_info': task_ex.state_info, 'result': data_flow.get_task_execution_result(task_ex), 'published': task_ex.published, 'type': task_ex.type, 'workflow_execution_id': task_ex.workflow_execution_id, 'created_at': task_ex.created_at.isoformat(' '), 'updated_at': task_ex.updated_at.isoformat(' ') if task_ex.updated_at is not None else None } def uuid_(context=None): return utils.generate_unicode_uuid() def global_(context, var_name): wf_ex = db_api.get_workflow_execution(context['__execution']['id']) return wf_ex.context.get(var_name) def json_parse_(context, data): return jsonutils.loads(data) def yaml_parse_(context, data): return yaml.safe_load(data) mistral-6.0.0/mistral/workflow/0000775000175100017510000000000013245513604016544 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/workflow/lookup_utils.py0000666000175100017510000000777513245513272021672 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ The intention of the module is providing various DB related lookup functions for more convenient usage within the workflow engine. Some of the functions may provide caching capabilities. 
WARNING: Oftentimes, persistent objects returned by the methods in this module won't be attached to the current DB SQLAlchemy session because they are returned from the cache and therefore they need to be used carefully without trying to do any lazy loading etc. These objects are also not suitable for re-attaching them to a session in order to update their persistent DB state. Mostly, they are useful for doing any kind of fast lookups with in order to make some decision based on their state. """ import cachetools import threading from mistral.db.v2 import api as db_api from mistral.workflow import states def _create_lru_cache_for_workflow_execution(wf_ex_id): return cachetools.LRUCache(maxsize=500) # This is a two-level caching structure. # First level: [ -> ] # Second level (task execution cache): [ -> ] # The first level (by workflow execution id) allows to invalidate # needed cache entry when the workflow gets completed. _TASK_EX_CACHE = cachetools.LRUCache( maxsize=100, missing=_create_lru_cache_for_workflow_execution ) _CACHE_LOCK = threading.RLock() def find_task_executions_by_name(wf_ex_id, task_name): """Finds task executions by workflow execution id and task name. :param wf_ex_id: Workflow execution id. :param task_name: Task name. :return: Task executions (possibly a cached value). """ with _CACHE_LOCK: t_execs = _TASK_EX_CACHE[wf_ex_id].get(task_name) if t_execs: return t_execs t_execs = db_api.get_task_executions( workflow_execution_id=wf_ex_id, name=task_name, sort_keys=[] # disable sorting ) # We can cache only finished tasks because they won't change. 
all_finished = ( t_execs and all([states.is_completed(t_ex.state) for t_ex in t_execs]) ) if all_finished: with _CACHE_LOCK: _TASK_EX_CACHE[wf_ex_id][task_name] = t_execs return t_execs def find_task_executions_by_spec(wf_ex_id, task_spec): return find_task_executions_by_name(wf_ex_id, task_spec.get_name()) def find_task_executions_by_specs(wf_ex_id, task_specs): res = [] for t_s in task_specs: res = res + find_task_executions_by_spec(wf_ex_id, t_s) return res def find_task_executions_with_state(wf_ex_id, state): return db_api.get_task_executions( workflow_execution_id=wf_ex_id, state=state ) def find_successful_task_executions(wf_ex_id): return find_task_executions_with_state(wf_ex_id, states.SUCCESS) def find_error_task_executions(wf_ex_id): return find_task_executions_with_state(wf_ex_id, states.ERROR) def find_cancelled_task_executions(wf_ex_id): return find_task_executions_with_state(wf_ex_id, states.CANCELLED) def find_completed_task_executions(wf_ex_id): return db_api.get_completed_task_executions(workflow_execution_id=wf_ex_id) def get_task_execution_cache_size(): return len(_TASK_EX_CACHE) def invalidate_cached_task_executions(wf_ex_id): with _CACHE_LOCK: if wf_ex_id in _TASK_EX_CACHE: del _TASK_EX_CACHE[wf_ex_id] def clear_caches(): with _CACHE_LOCK: _TASK_EX_CACHE.clear() mistral-6.0.0/mistral/workflow/reverse_workflow.py0000666000175100017510000001436713245513262022540 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
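`lookup_utils.py` above keeps a two-level cache — workflow execution id on the first level, task name on the second — so a completed workflow's entry can be invalidated in one step. A dependency-free sketch of that structure, using `OrderedDict`-based LRU eviction instead of `cachetools` (class name and size limits are illustrative, not Mistral's actual implementation):

```python
import threading
from collections import OrderedDict


class TwoLevelCache(object):
    """Sketch of [wf_ex_id -> {task_name -> value}] with LRU eviction."""

    def __init__(self, max_workflows=100, max_tasks=500):
        self._max_wf = max_workflows
        self._max_tasks = max_tasks
        self._data = OrderedDict()        # wf_ex_id -> OrderedDict
        self._lock = threading.RLock()    # reentrant, like _CACHE_LOCK above

    def get(self, wf_ex_id, task_name):
        with self._lock:
            tasks = self._data.get(wf_ex_id)
            return tasks.get(task_name) if tasks else None

    def put(self, wf_ex_id, task_name, value):
        with self._lock:
            tasks = self._data.setdefault(wf_ex_id, OrderedDict())
            self._data.move_to_end(wf_ex_id)
            if len(self._data) > self._max_wf:
                self._data.popitem(last=False)    # evict oldest workflow
            tasks[task_name] = value
            tasks.move_to_end(task_name)
            if len(tasks) > self._max_tasks:
                tasks.popitem(last=False)         # evict oldest task entry

    def invalidate(self, wf_ex_id):
        # Called when a workflow completes: its entries won't be read again.
        with self._lock:
            self._data.pop(wf_ex_id, None)


cache = TwoLevelCache()
cache.put('wf-1', 'task_a', ['t_ex_1'])
print(cache.get('wf-1', 'task_a'))   # ['t_ex_1']
cache.invalidate('wf-1')
print(cache.get('wf-1', 'task_a'))   # None
```

The key design point mirrored here is the one the module's docstring warns about: only immutable facts (finished task executions) are worth caching, and invalidation happens per workflow, not per task.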
# See the License for the specific language governing permissions and # limitations under the License. import networkx as nx from networkx.algorithms import traversal from mistral import exceptions as exc from mistral.workflow import base from mistral.workflow import commands from mistral.workflow import data_flow from mistral.workflow import lookup_utils from mistral.workflow import states class ReverseWorkflowController(base.WorkflowController): """'Reverse workflow controller. This controller implements the workflow pattern which is based on dependencies between tasks, i.e. each task in a workflow graph may be dependent on other tasks. To run this type of workflow user must specify a task name that serves a target node in the graph that the algorithm should come to by resolving all dependencies. For example, if there's a workflow consisting of two tasks 'A' and 'B' where 'A' depends on 'B' and if we specify a target task name 'A' then the controller first will run task 'B' and then, when a dependency of 'A' is resolved, will run task 'A'. """ __workflow_type__ = "reverse" def _find_next_commands(self, task_ex): """Finds all tasks with resolved dependencies. This method finds all tasks with resolved dependencies and returns them in the form of workflow commands. """ cmds = super(ReverseWorkflowController, self)._find_next_commands( task_ex ) # TODO(rakhmerov): Adapt reverse workflow to non-locking model. # 1. Task search must use task_ex parameter. # 2. When a task has more than one dependency it's possible to # get into 'phantom read' phenomena and create multiple instances # of the same task. So 'unique_key' in conjunction with 'wait_flag' # must be used to prevent this. 
task_specs = self._find_task_specs_with_satisfied_dependencies() return cmds + [ commands.RunTask( self.wf_ex, self.wf_spec, t_s, self.get_task_inbound_context(t_s) ) for t_s in task_specs ] def _get_target_task_specification(self): task_name = self.wf_ex.params.get('task_name') task_spec = self.wf_spec.get_tasks().get(task_name) if not task_spec: raise exc.WorkflowException( 'Invalid task name [wf_spec=%s, task_name=%s]' % (self.wf_spec, task_name) ) return task_spec def _get_upstream_task_executions(self, task_spec): t_specs = [ self.wf_spec.get_tasks()[t_name] for t_name in self.wf_spec.get_task_requires(task_spec) or [] ] return list( filter( lambda t_e: t_e.state == states.SUCCESS, lookup_utils.find_task_executions_by_specs( self.wf_ex.id, t_specs ) ) ) def evaluate_workflow_final_context(self): task_execs = lookup_utils.find_task_executions_by_spec( self.wf_ex.id, self._get_target_task_specification() ) # NOTE: For reverse workflow there can't be multiple # executions for one task. assert len(task_execs) <= 1 if len(task_execs) == 1: return data_flow.evaluate_task_outbound_context(task_execs[0]) else: return {} def get_logical_task_state(self, task_ex): # TODO(rakhmerov): Implement. return base.TaskLogicalState(task_ex.state, task_ex.state_info) def is_error_handled_for(self, task_ex): return task_ex.state != states.ERROR def all_errors_handled(self): task_execs = lookup_utils.find_error_task_executions(self.wf_ex.id) return len(task_execs) == 0 def _find_task_specs_with_satisfied_dependencies(self): """Given a target task name finds tasks with no dependencies. :return: Task specifications with no dependencies. """ tasks_spec = self.wf_spec.get_tasks() graph = self._build_graph(tasks_spec) # Unwind tasks from the target task # and filter out tasks with dependencies. 
return [ t_s for t_s in traversal.dfs_postorder_nodes( graph.reverse(), self._get_target_task_specification() ) if self._is_satisfied_task(t_s) ] def _is_satisfied_task(self, task_spec): if lookup_utils.find_task_executions_by_spec( self.wf_ex.id, task_spec): return False if not self.wf_spec.get_task_requires(task_spec): return True success_t_names = set() for t_ex in self.wf_ex.task_executions: if t_ex.state == states.SUCCESS: success_t_names.add(t_ex.name) return not ( set(self.wf_spec.get_task_requires(task_spec)) - success_t_names ) def _build_graph(self, tasks_spec): graph = nx.DiGraph() # Add graph nodes. for t in tasks_spec: graph.add_node(t) # Add graph edges. for t_spec in tasks_spec: for dep_t_spec in self._get_dependency_tasks(tasks_spec, t_spec): graph.add_edge(dep_t_spec, t_spec) return graph def _get_dependency_tasks(self, tasks_spec, task_spec): dep_task_names = self.wf_spec.get_task_requires(task_spec) if len(dep_task_names) == 0: return [] dep_t_specs = set() for t_spec in tasks_spec: for t_name in dep_task_names: if t_name == t_spec.get_name(): dep_t_specs.add(t_spec) return dep_t_specs mistral-6.0.0/mistral/workflow/data_flow.py0000666000175100017510000002233013245513262021060 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
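The dependency unwinding that `ReverseWorkflowController` performs above with `traversal.dfs_postorder_nodes` over the reversed graph can be sketched without Mistral's internals or networkx. The `requires` mapping and task names below are illustrative, not Mistral's actual data model; this is a minimal standalone sketch of the idea, assuming a simple dict of dependencies.

```python
# Sketch of the reverse-workflow idea: given a target task and a
# 'requires' mapping, collect every task the target (transitively)
# depends on, in an order where dependencies come before dependents
# (a DFS post-order, mirroring dfs_postorder_nodes on the reversed
# graph in the controller above).

def resolve_order(requires, target):
    """Return tasks reachable from 'target' via dependencies,
    dependencies first."""
    order = []
    visited = set()

    def visit(task):
        if task in visited:
            return
        visited.add(task)
        for dep in requires.get(task, []):
            visit(dep)
        order.append(task)

    visit(target)
    return order


# 'A' depends on 'B', which depends on 'C'; 'D' is unrelated to the
# target and is therefore never scheduled.
requires = {'A': ['B'], 'B': ['C'], 'C': [], 'D': []}
print(resolve_order(requires, 'A'))  # ['C', 'B', 'A']
```

Note this sketch ignores the per-task readiness check (`_is_satisfied_task`); the real controller only emits commands for tasks whose dependencies have already reached SUCCESS.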
import copy from oslo_config import cfg from oslo_log import log as logging from mistral import context as auth_ctx from mistral.db.v2.sqlalchemy import models from mistral import exceptions as exc from mistral import expressions as expr from mistral.lang import parser as spec_parser from mistral import utils from mistral.utils import inspect_utils from mistral.workflow import states LOG = logging.getLogger(__name__) CONF = cfg.CONF class ContextView(dict): """Workflow context view. It's essentially an immutable composite structure providing fast lookup over multiple dictionaries w/o having to merge those dictionaries every time. The lookup algorithm simply iterates over provided dictionaries one by one and returns a value taken from the first dictionary where the provided key exists. This means that these dictionaries must be provided in the order of decreasing priorities. Note: Although this class extends built-in 'dict' it shouldn't be considered a normal dictionary because it may not implement all methods and account for all corner cases. It's only a read-only view. 
""" def __init__(self, *dicts): super(ContextView, self).__init__() self.dicts = dicts or [] def __getitem__(self, key): for d in self.dicts: if key in d: return d[key] raise KeyError(key) def get(self, key, default=None): for d in self.dicts: if key in d: return d[key] return default def __contains__(self, key): return any(key in d for d in self.dicts) def keys(self): keys = set() for d in self.dicts: keys.update(d.keys()) return keys def items(self): return [(k, self[k]) for k in self.keys()] def values(self): return [self[k] for k in self.keys()] def iteritems(self): # NOTE: This is for compatibility with Python 2.7 # YAQL converts output objects after they are evaluated # to basic types and it uses six.iteritems() internally # which calls d.items() in case of Python 2.7 and d.iteritems() # for Python 2.7 return iter(self.items()) def iterkeys(self): # NOTE: This is for compatibility with Python 2.7 # See the comment for iteritems(). return iter(self.keys()) def itervalues(self): # NOTE: This is for compatibility with Python 2.7 # See the comment for iteritems(). return iter(self.values()) def __len__(self): return len(self.keys()) @staticmethod def _raise_immutable_error(): raise exc.MistralError('Context view is immutable.') def __setitem__(self, key, value): self._raise_immutable_error() def update(self, E=None, **F): self._raise_immutable_error() def clear(self): self._raise_immutable_error() def pop(self, k, d=None): self._raise_immutable_error() def popitem(self): self._raise_immutable_error() def __delitem__(self, key): self._raise_immutable_error() def evaluate_upstream_context(upstream_task_execs): published_vars = {} ctx = {} for t_ex in upstream_task_execs: # TODO(rakhmerov): These two merges look confusing. So it's a # temporary solution. There's still the bug # https://bugs.launchpad.net/mistral/+bug/1424461 that needs to be # fixed using context variable versioning. 
published_vars = utils.merge_dicts( published_vars, t_ex.published ) utils.merge_dicts(ctx, evaluate_task_outbound_context(t_ex)) return utils.merge_dicts(ctx, published_vars) def _extract_execution_result(ex): if isinstance(ex, models.WorkflowExecution): return ex.output if ex.output: return ex.output['result'] def invalidate_task_execution_result(task_ex): for ex in task_ex.executions: ex.accepted = False def get_task_execution_result(task_ex): execs = task_ex.executions execs.sort( key=lambda x: x.runtime_context.get('index') ) results = [ _extract_execution_result(ex) for ex in execs if hasattr(ex, 'output') and ex.accepted ] task_spec = spec_parser.get_task_spec(task_ex.spec) if task_spec.get_with_items(): # TODO(rakhmerov): Smell: violation of 'with-items' encapsulation. with_items_ctx = task_ex.runtime_context.get('with_items') if with_items_ctx and with_items_ctx.get('count') > 0: return results else: return [] return results[0] if len(results) == 1 else results def publish_variables(task_ex, task_spec): if task_ex.state not in [states.SUCCESS, states.ERROR]: return wf_ex = task_ex.workflow_execution expr_ctx = ContextView(task_ex.in_context, wf_ex.context, wf_ex.input) if task_ex.name in expr_ctx: LOG.warning( 'Shadowing context variable with task name while ' 'publishing: %s', task_ex.name ) publish_spec = task_spec.get_publish(task_ex.state) if not publish_spec: return # Publish branch variables. branch_vars = publish_spec.get_branch() task_ex.published = expr.evaluate_recursively(branch_vars, expr_ctx) # Publish global variables. global_vars = publish_spec.get_global() utils.merge_dicts( task_ex.workflow_execution.context, expr.evaluate_recursively(global_vars, expr_ctx) ) # TODO(rakhmerov): # 1. Publish atomic variables. # 2. 
Add the field "publish" in TaskExecution model similar to "published" # but containing info as # {'branch': {vars}, 'global': {vars}, 'atomic': {vars}} def evaluate_task_outbound_context(task_ex): """Evaluates task outbound Data Flow context. This method assumes that complete task output (after publisher etc.) has already been evaluated. :param task_ex: DB task. :return: Outbound task Data Flow context. """ in_context = ( copy.deepcopy(dict(task_ex.in_context)) if task_ex.in_context is not None else {} ) return utils.update_dict(in_context, task_ex.published) def evaluate_workflow_output(wf_ex, wf_output, ctx): """Evaluates workflow output. :param wf_ex: Workflow execution. :param wf_output: Workflow output. :param ctx: Final Data Flow context (cause task's outbound context). """ # Evaluate workflow 'output' clause using the final workflow context. ctx_view = ContextView(ctx, wf_ex.context, wf_ex.input) output = expr.evaluate_recursively(wf_output, ctx_view) # TODO(rakhmerov): Many don't like that we return the whole context # if 'output' is not explicitly defined. return output or ctx def add_current_task_to_context(ctx, task_id, task_name): ctx['__task_execution'] = { 'id': task_id, 'name': task_name } return ctx def remove_internal_data_from_context(ctx): if '__task_execution' in ctx: del ctx['__task_execution'] def add_openstack_data_to_context(wf_ex): wf_ex.context = wf_ex.context or {} if CONF.pecan.auth_enable: exec_ctx = auth_ctx.ctx() if exec_ctx: wf_ex.context.update({'openstack': exec_ctx.to_dict()}) def add_execution_to_context(wf_ex): wf_ex.context = wf_ex.context or {} wf_ex.context['__execution'] = {'id': wf_ex.id} def add_environment_to_context(wf_ex): # TODO(rakhmerov): This is redundant, we can always get env from WF params wf_ex.context = wf_ex.context or {} # If env variables are provided, add an evaluated copy into the context. 
if 'env' in wf_ex.params: env = copy.deepcopy(wf_ex.params['env']) if ('evaluate_env' in wf_ex.params and not wf_ex.params['evaluate_env']): wf_ex.context['__env'] = env else: wf_ex.context['__env'] = expr.evaluate_recursively( env, {'__env': env} ) def add_workflow_variables_to_context(wf_ex, wf_spec): wf_ex.context = wf_ex.context or {} # The context for calculating workflow variables is workflow input # and other data already stored in workflow initial context. ctx_view = ContextView(wf_ex.context, wf_ex.input) wf_vars = expr.evaluate_recursively(wf_spec.get_vars(), ctx_view) utils.merge_dicts(wf_ex.context, wf_vars) def evaluate_object_fields(obj, context): fields = inspect_utils.get_public_fields(obj) evaluated_fields = expr.evaluate_recursively(fields, context) for k, v in evaluated_fields.items(): setattr(obj, k, v) mistral-6.0.0/mistral/workflow/states.py0000666000175100017510000000414613245513262020430 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
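The lookup rule of `ContextView` defined in `data_flow.py` above (first dictionary containing the key wins, dictionaries given in order of decreasing priority) can be illustrated in isolation. The `View` class below is a minimal standalone sketch, not the actual class from `mistral/workflow/data_flow.py`, and the sample dictionaries are invented for illustration.

```python
# Minimal read-only composite view: dicts are consulted in order of
# decreasing priority and the first one that contains the key wins.
from collections.abc import Mapping


class View(Mapping):
    def __init__(self, *dicts):
        self._dicts = dicts

    def __getitem__(self, key):
        for d in self._dicts:
            if key in d:
                return d[key]
        raise KeyError(key)

    def __iter__(self):
        seen = set()
        for d in self._dicts:
            for k in d:
                if k not in seen:
                    seen.add(k)
                    yield k

    def __len__(self):
        return sum(1 for _ in self)


task_ctx = {'x': 1}            # highest priority (e.g. task in_context)
wf_ctx = {'x': 2, 'y': 3}      # middle priority (workflow context)
wf_input = {'y': 4, 'z': 5}    # lowest priority (workflow input)

view = View(task_ctx, wf_ctx, wf_input)
print(view['x'], view['y'], view['z'])  # 1 3 5
```

No merging happens up front, which is the point of the real `ContextView`: building the view is O(1) regardless of how large the underlying contexts are.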
"""Valid task and workflow states.""" IDLE = 'IDLE' WAITING = 'WAITING' RUNNING = 'RUNNING' RUNNING_DELAYED = 'DELAYED' PAUSED = 'PAUSED' SUCCESS = 'SUCCESS' CANCELLED = 'CANCELLED' ERROR = 'ERROR' _ALL = [ IDLE, WAITING, RUNNING, RUNNING_DELAYED, PAUSED, SUCCESS, CANCELLED, ERROR ] _VALID_TRANSITIONS = { IDLE: [RUNNING, ERROR, CANCELLED], WAITING: [RUNNING], RUNNING: [PAUSED, RUNNING_DELAYED, SUCCESS, ERROR, CANCELLED], RUNNING_DELAYED: [RUNNING, ERROR, CANCELLED], PAUSED: [RUNNING, ERROR, CANCELLED], SUCCESS: [], CANCELLED: [RUNNING], ERROR: [RUNNING] } def is_valid(state): return state in _ALL def is_invalid(state): return not is_valid(state) def is_completed(state): return state in [SUCCESS, ERROR, CANCELLED] def is_cancelled(state): return state == CANCELLED def is_running(state): return state in [RUNNING, RUNNING_DELAYED] def is_waiting(state): return state == WAITING def is_idle(state): return state == IDLE def is_paused(state): return state == PAUSED def is_paused_or_completed(state): return is_paused(state) or is_completed(state) def is_paused_or_idle(state): return is_paused(state) or is_idle(state) def is_valid_transition(from_state, to_state): if is_invalid(from_state) or is_invalid(to_state): return False if from_state == to_state: return True return to_state in _VALID_TRANSITIONS[from_state] mistral-6.0.0/mistral/workflow/base.py0000666000175100017510000001766213245513262020046 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # Copyright 2015 - Huawei Technologies Co. Ltd # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import abc from oslo_log import log as logging from osprofiler import profiler from mistral import exceptions as exc from mistral.lang import parser as spec_parser from mistral import utils as u from mistral.workflow import commands from mistral.workflow import data_flow from mistral.workflow import lookup_utils from mistral.workflow import states LOG = logging.getLogger(__name__) @profiler.trace('wf-controller-get-controller', hide_args=True) def get_controller(wf_ex, wf_spec=None): """Gets a workflow controller instance by given workflow execution object. :param wf_ex: Workflow execution object. :param wf_spec: Workflow specification object. If passed, the method works faster. :returns: Workflow controller class. """ if not wf_spec: wf_spec = spec_parser.get_workflow_spec_by_execution_id(wf_ex.id) wf_type = wf_spec.get_type() ctrl_cls = None for cls in u.iter_subclasses(WorkflowController): if cls.__workflow_type__ == wf_type: ctrl_cls = cls break if not ctrl_cls: raise exc.MistralError( 'Failed to find a workflow controller [type=%s]' % wf_type ) return ctrl_cls(wf_ex, wf_spec) class TaskLogicalState(object): """Task logical state. This data structure describes what state a task should have according to the logic of the workflow type and state of other tasks. 
""" def __init__(self, state, state_info=None, cardinality=0, triggered_by=None): self.state = state self.state_info = state_info self.cardinality = cardinality self.triggered_by = triggered_by or [] def get_state(self): return self.state def get_state_info(self): return self.state_info def get_cardinality(self): return self.cardinality def get_triggered_by(self): return self.get_triggered_by class WorkflowController(object): """Workflow Controller base class. Different workflow controllers implement different workflow algorithms. In practice it may actually mean that there may be multiple ways of describing workflow models (and even languages) that will be supported by Mistral. """ def __init__(self, wf_ex, wf_spec=None): """Creates a new workflow controller. :param wf_ex: Workflow execution. :param wf_spec: Workflow specification. """ self.wf_ex = wf_ex if wf_spec is None: wf_spec = spec_parser.get_workflow_spec_by_execution_id(wf_ex.id) self.wf_spec = wf_spec @profiler.trace('workflow-controller-continue-workflow', hide_args=True) def continue_workflow(self, task_ex=None): """Calculates a list of commands to continue the workflow. Given a workflow specification this method makes required analysis according to this workflow type rules and identifies a list of commands needed to continue the workflow. :param task_ex: Task execution that caused workflow continuation. Optional. If not specified, it means that no certain task caused this operation (e.g. workflow has been just started or resumed manually). :return: List of workflow commands (instances of mistral.workflow.commands.WorkflowCommand). """ if self._is_paused_or_completed(): return [] return self._find_next_commands(task_ex) def rerun_tasks(self, task_execs, reset=True): """Gets commands to rerun existing task executions. :param task_execs: List of task executions. :param reset: If true, then purge action executions for the tasks. :return: List of workflow commands. 
""" if self._is_paused_or_completed(): return [] cmds = [ commands.RunExistingTask(self.wf_ex, self.wf_spec, t_e, reset) for t_e in task_execs ] LOG.debug("Commands to rerun workflow tasks: %s", cmds) return cmds @abc.abstractmethod def get_logical_task_state(self, task_ex): """Determines a logical state of the given task. :param task_ex: Task execution. :return: Tuple (state, state_info, cardinality) where 'state' and 'state_info' are the corresponding values which the given task should have according to workflow rules and current states of other tasks. 'cardinality' gives the estimation on the number of preconditions that are not yet met in case if state is WAITING. This number can be used to estimate how frequently we can refresh the state of this task. """ raise NotImplementedError @abc.abstractmethod def is_error_handled_for(self, task_ex): """Determines if error is handled for specific task. :param task_ex: Task execution. :return: True if either there is no error at all or error is considered handled. """ raise NotImplementedError @abc.abstractmethod def all_errors_handled(self): """Determines if all errors (if any) are handled. :return: True if either there aren't errors at all or all errors are considered handled. """ raise NotImplementedError def any_cancels(self): """Determines if there are any task cancellations. :return: True if there is one or more tasks in cancelled state. """ t_execs = lookup_utils.find_cancelled_task_executions(self.wf_ex.id) return len(t_execs) > 0 @abc.abstractmethod def evaluate_workflow_final_context(self): """Evaluates final workflow context assuming that workflow has finished. :return: Final workflow context. """ raise NotImplementedError def get_task_inbound_context(self, task_spec): # TODO(rakhmerov): This method should also be able to work with task_ex # to cover 'split' (aka 'merge') use case. 
upstream_task_execs = self._get_upstream_task_executions(task_spec) return data_flow.evaluate_upstream_context(upstream_task_execs) @abc.abstractmethod def _get_upstream_task_executions(self, task_spec): """Gets workflow upstream tasks for the given task. :param task_spec: Task specification. :return: List of upstream task executions for the given task spec. """ raise NotImplementedError @abc.abstractmethod def _find_next_commands(self, task_ex): """Finds commands that should run next. A concrete algorithm of finding such tasks depends on a concrete workflow controller. :return: List of workflow commands. """ # If task execution was passed then we should make all calculations # only based on it. if task_ex: return [] # Add all tasks in IDLE state. idle_tasks = lookup_utils.find_task_executions_with_state( self.wf_ex.id, states.IDLE ) return [ commands.RunExistingTask(self.wf_ex, self.wf_spec, t) for t in idle_tasks ] def _is_paused_or_completed(self): return states.is_paused_or_completed(self.wf_ex.state) mistral-6.0.0/mistral/workflow/direct_workflow.py0000666000175100017510000004105713245513262022333 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
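The workflow and task state machine defined in `mistral/workflow/states.py` above can be exercised with a small standalone check. The transition table below copies the `_VALID_TRANSITIONS` mapping from that module; the helper function name is illustrative.

```python
# Standalone sketch of the state machine from states.py: a transition
# is valid if it appears in the table for the source state, or if
# source and destination are the same state (a no-op transition).

VALID = {
    'IDLE': ['RUNNING', 'ERROR', 'CANCELLED'],
    'WAITING': ['RUNNING'],
    'RUNNING': ['PAUSED', 'DELAYED', 'SUCCESS', 'ERROR', 'CANCELLED'],
    'DELAYED': ['RUNNING', 'ERROR', 'CANCELLED'],
    'PAUSED': ['RUNNING', 'ERROR', 'CANCELLED'],
    'SUCCESS': [],
    'CANCELLED': ['RUNNING'],
    'ERROR': ['RUNNING'],
}


def is_valid_transition(src, dst):
    # Unknown states never produce a valid transition.
    if src not in VALID or dst not in VALID:
        return False
    if src == dst:
        return True
    return dst in VALID[src]


print(is_valid_transition('RUNNING', 'SUCCESS'))  # True
print(is_valid_transition('SUCCESS', 'RUNNING'))  # False: SUCCESS is terminal
print(is_valid_transition('ERROR', 'RUNNING'))    # True: rerun after failure
```

Note that SUCCESS is the only truly terminal state here; ERROR and CANCELLED can go back to RUNNING, which is what makes task rerun and workflow resume possible.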
from oslo_log import log as logging from osprofiler import profiler from mistral import exceptions as exc from mistral import expressions as expr from mistral import utils from mistral.workflow import base from mistral.workflow import commands from mistral.workflow import data_flow from mistral.workflow import lookup_utils from mistral.workflow import states LOG = logging.getLogger(__name__) class DirectWorkflowController(base.WorkflowController): """'Direct workflow' handler. This handler implements the workflow pattern which is based on direct transitions between tasks, i.e. after each task completion a decision should be made which tasks should run next based on result of task execution. Note, that tasks can run in parallel. For example, if there's a workflow consisting of three tasks 'A', 'B' and 'C' where 'A' starts first then 'B' and 'C' can start second if certain associated with transition 'A'->'B' and 'A'->'C' evaluate to true. """ __workflow_type__ = "direct" def _get_upstream_task_executions(self, task_spec): return list( filter( lambda t_e: self._is_upstream_task_execution(task_spec, t_e), lookup_utils.find_task_executions_by_specs( self.wf_ex.id, self.wf_spec.find_inbound_task_specs(task_spec) ) ) ) def _is_upstream_task_execution(self, t_spec, t_ex_candidate): if not states.is_completed(t_ex_candidate.state): return False if not t_spec.get_join(): return t_ex_candidate.processed induced_state, _, _ = self._get_induced_join_state( self.wf_spec.get_tasks()[t_ex_candidate.name], self._find_task_execution_by_name(t_ex_candidate.name), t_spec ) return induced_state == states.RUNNING def _find_next_commands(self, task_ex=None): cmds = super(DirectWorkflowController, self)._find_next_commands( task_ex ) # Checking if task_ex is empty here is a serious optimization here # because 'self.wf_ex.task_executions' leads to initialization of # the entire collection which in case of highly-parallel workflows # may be very expensive. 
if not task_ex and not self.wf_ex.task_executions: return self._find_start_commands() if task_ex: task_execs = [task_ex] else: task_execs = [ t_ex for t_ex in self.wf_ex.task_executions if states.is_completed(t_ex.state) and not t_ex.processed ] for t_ex in task_execs: cmds.extend(self._find_next_commands_for_task(t_ex)) return cmds def _find_start_commands(self): return [ commands.RunTask( self.wf_ex, self.wf_spec, t_s, self.get_task_inbound_context(t_s) ) for t_s in self.wf_spec.find_start_tasks() ] def _find_next_commands_for_task(self, task_ex): """Finds next commands based on the state of the given task. :param task_ex: Task execution for which next commands need to be found. :return: List of workflow commands. """ cmds = [] ctx = data_flow.evaluate_task_outbound_context(task_ex) for t_n, params, event_name in self._find_next_tasks(task_ex, ctx=ctx): t_s = self.wf_spec.get_tasks()[t_n] if not (t_s or t_n in commands.RESERVED_CMDS): raise exc.WorkflowException("Task '%s' not found." % t_n) elif not t_s: t_s = self.wf_spec.get_tasks()[task_ex.name] data_flow.remove_internal_data_from_context(ctx) triggered_by = [ { 'task_id': task_ex.id, 'event': event_name } ] cmd = commands.create_command( t_n, self.wf_ex, self.wf_spec, t_s, ctx, params=params, triggered_by=triggered_by ) self._configure_if_join(cmd) cmds.append(cmd) LOG.debug("Found commands: %s", cmds) return cmds def _configure_if_join(self, cmd): if not isinstance(cmd, commands.RunTask): return if not cmd.task_spec.get_join(): return cmd.unique_key = self._get_join_unique_key(cmd) cmd.wait = True def _get_join_unique_key(self, cmd): return 'join-task-%s-%s' % (self.wf_ex.id, cmd.task_spec.get_name()) # TODO(rakhmerov): Need to refactor this method to be able to pass tasks # whose contexts need to be merged. 
def evaluate_workflow_final_context(self): ctx = {} for t_ex in self._find_end_task_executions(): ctx = utils.merge_dicts( ctx, data_flow.evaluate_task_outbound_context(t_ex) ) data_flow.remove_internal_data_from_context(ctx) return ctx def get_logical_task_state(self, task_ex): task_spec = self.wf_spec.get_tasks()[task_ex.name] if not task_spec.get_join(): # A simple 'non-join' task does not have any preconditions # based on state of other tasks so its logical state always # equals to its real state. return base.TaskLogicalState(task_ex.state, task_ex.state_info) return self._get_join_logical_state(task_spec) def is_error_handled_for(self, task_ex): return bool(self.wf_spec.get_on_error_clause(task_ex.name)) def all_errors_handled(self): for t_ex in lookup_utils.find_error_task_executions(self.wf_ex.id): ctx_view = data_flow.ContextView( data_flow.evaluate_task_outbound_context(t_ex), self.wf_ex.context, self.wf_ex.input ) tasks_on_error = self._find_next_tasks_for_clause( self.wf_spec.get_on_error_clause(t_ex.name), ctx_view ) if not tasks_on_error: return False return True def _find_end_task_executions(self): def is_end_task(t_ex): try: return not self._has_outbound_tasks(t_ex) except exc.MistralException: # If some error happened during the evaluation of outbound # tasks we consider that the given task is an end task. # Due to this output-on-error could reach the outbound context # of given task also. return True return list( filter( is_end_task, lookup_utils.find_completed_task_executions(self.wf_ex.id) ) ) def _has_outbound_tasks(self, task_ex): # In order to determine if there are outbound tasks we just need # to calculate next task names (based on task outbound context) # and remove all engine commands. To do the latter it's enough to # check if there's a corresponding task specification for a task name. 
return bool([ t_name for t_name in self._find_next_task_names(task_ex) if self.wf_spec.get_tasks()[t_name] ]) def _find_next_task_names(self, task_ex): return [t[0] for t in self._find_next_tasks(task_ex)] def _find_next_tasks(self, task_ex, ctx=None): t_state = task_ex.state t_name = task_ex.name ctx_view = data_flow.ContextView( ctx or data_flow.evaluate_task_outbound_context(task_ex), self.wf_ex.context, self.wf_ex.input ) # [(task_name, params, 'on-success'|'on-error'|'on-complete'), ...] result = [] def process_clause(clause, event_name): task_tuples = self._find_next_tasks_for_clause(clause, ctx_view) for t in task_tuples: result.append((t[0], t[1], event_name)) if t_state == states.SUCCESS: process_clause( self.wf_spec.get_on_success_clause(t_name), 'on-success' ) elif t_state == states.ERROR: process_clause( self.wf_spec.get_on_error_clause(t_name), 'on-error' ) if states.is_completed(t_state) and not states.is_cancelled(t_state): process_clause( self.wf_spec.get_on_complete_clause(t_name), 'on-complete' ) return result @staticmethod def _find_next_tasks_for_clause(clause, ctx): """Finds next tasks names. This method finds next task(command) base on given {name: condition} dictionary. :param clause: Tuple (task_name, condition, parameters) taken from 'on-complete', 'on-success' or 'on-error' clause. :param ctx: Context that clause expressions should be evaluated against of. :return: List of task(command) names. """ if not clause: return [] return [ (t_name, expr.evaluate_recursively(params, ctx)) for t_name, condition, params in clause if not condition or expr.evaluate(condition, ctx) ] @profiler.trace('direct-wf-controller-get-join-logical-state') def _get_join_logical_state(self, task_spec): """Evaluates logical state of 'join' task. :param task_spec: 'join' task specification. 
:return: TaskLogicalState (state, state_info, cardinality, triggered_by) where 'state' and 'state_info' describe the logical state of the given 'join' task and 'cardinality' gives the remaining number of unfulfilled preconditions. If logical state is not WAITING then 'cardinality' should always be 0. """ # TODO(rakhmerov): We need to use task_ex instead of task_spec # in order to cover a use case when there's more than one instance # of the same 'join' task in a workflow. # TODO(rakhmerov): In some cases this method will be expensive because # it uses a multi-step recursive search. We need to optimize it moving # forward (e.g. with Workflow Execution Graph). join_expr = task_spec.get_join() in_task_specs = self.wf_spec.find_inbound_task_specs(task_spec) if not in_task_specs: return base.TaskLogicalState(states.RUNNING) # List of tuples (task_name, task_ex, state, depth, event_name). induced_states = [] for t_s in in_task_specs: t_ex = self._find_task_execution_by_name(t_s.get_name()) tup = self._get_induced_join_state(t_s, t_ex, task_spec) induced_states.append( ( t_s.get_name(), t_ex, tup[0], tup[1], tup[2] ) ) def count(state): cnt = 0 total_depth = 0 for s in induced_states: if s[2] == state: cnt += 1 total_depth += s[3] return cnt, total_depth errors_tuple = count(states.ERROR) runnings_tuple = count(states.RUNNING) total_count = len(induced_states) def _blocked_message(): return ( 'Blocked by tasks: %s' % [s[0] for s in induced_states if s[2] == states.WAITING] ) def _failed_message(): return ( 'Failed by tasks: %s' % [s[0] for s in induced_states if s[2] == states.ERROR] ) def _triggered_by(state): return [ {'task_id': s[1].id, 'event': s[4]} for s in induced_states if s[2] == state and s[1] is not None ] # If "join" is configured as a number or 'one'. 
if isinstance(join_expr, int) or join_expr == 'one': spec_cardinality = 1 if join_expr == 'one' else join_expr if runnings_tuple[0] >= spec_cardinality: return base.TaskLogicalState( states.RUNNING, triggered_by=_triggered_by(states.RUNNING) ) # E.g. 'join: 3' with inbound [ERROR, ERROR, RUNNING, WAITING] # No chance to get 3 RUNNING states. if errors_tuple[0] > (total_count - spec_cardinality): return base.TaskLogicalState(states.ERROR, _failed_message()) # Calculate how many tasks need to finish to trigger this 'join'. cardinality = spec_cardinality - runnings_tuple[0] return base.TaskLogicalState( states.WAITING, _blocked_message(), cardinality=cardinality ) if join_expr == 'all': if total_count == runnings_tuple[0]: return base.TaskLogicalState( states.RUNNING, triggered_by=_triggered_by(states.RUNNING) ) if errors_tuple[0] > 0: return base.TaskLogicalState( states.ERROR, _failed_message(), triggered_by=_triggered_by(states.ERROR) ) # Remaining cardinality is just a difference between all tasks and # a number of those tasks that induce RUNNING state. cardinality = total_count - runnings_tuple[1] return base.TaskLogicalState( states.WAITING, _blocked_message(), cardinality=cardinality ) raise RuntimeError('Unexpected join expression: %s' % join_expr) # TODO(rakhmerov): Method signature is incorrect given that # we may have multiple task executions for a task. It should # accept inbound task execution rather than a spec. def _get_induced_join_state(self, in_task_spec, in_task_ex, join_task_spec): join_task_name = join_task_spec.get_name() if not in_task_ex: possible, depth = self._possible_route(in_task_spec) if possible: return states.WAITING, depth, None else: return states.ERROR, depth, 'impossible route' if not states.is_completed(in_task_ex.state): return states.WAITING, 1, None # [(task name, params, event name), ...] 
next_tasks_tuples = self._find_next_tasks(in_task_ex) next_tasks_dict = {tup[0]: tup[2] for tup in next_tasks_tuples} if join_task_name not in next_tasks_dict: return states.ERROR, 1, "not triggered" return states.RUNNING, 1, next_tasks_dict[join_task_name] def _find_task_execution_by_name(self, t_name): # Note: in case of 'join' completion check it's better to initialize # the entire task_executions collection to avoid too many DB queries. t_execs = lookup_utils.find_task_executions_by_name( self.wf_ex.id, t_name ) # TODO(rakhmerov): Temporary hack. See the previous comment. return t_execs[-1] if t_execs else None def _possible_route(self, task_spec, depth=1): in_task_specs = self.wf_spec.find_inbound_task_specs(task_spec) if not in_task_specs: return True, depth for t_s in in_task_specs: t_ex = self._find_task_execution_by_name(t_s.get_name()) if not t_ex: possible, depth = self._possible_route(t_s, depth + 1) if possible: return True, depth else: t_name = task_spec.get_name() if (not states.is_completed(t_ex.state) or t_name in self._find_next_task_names(t_ex)): return True, depth return False, depth mistral-6.0.0/mistral/workflow/__init__.py0000666000175100017510000000000013245513262020645 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/workflow/utils.py0000666000175100017510000000150213245513262020256 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
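The 'join' cardinality rules implemented in `DirectWorkflowController._get_join_logical_state` above can be condensed into a small sketch. Here an inbound branch's induced state is a plain string: 'RUNNING' means the completed inbound task actually triggers the join, 'ERROR' means it cannot, 'WAITING' means it has not completed yet. The function name and inputs are illustrative, not Mistral's API.

```python
# Sketch of the join logic: 'all' requires every inbound branch to
# trigger the join; an integer N (or 'one', shorthand for 1) requires
# at least N triggering branches, and fails early once too many
# branches can no longer trigger it.

def join_state(join_expr, inbound_states):
    total = len(inbound_states)
    triggering = inbound_states.count('RUNNING')
    errors = inbound_states.count('ERROR')

    if join_expr == 'all':
        if triggering == total:
            return 'RUNNING'
        if errors > 0:
            return 'ERROR'
        return 'WAITING'

    required = 1 if join_expr == 'one' else join_expr

    if triggering >= required:
        return 'RUNNING'
    # E.g. 'join: 3' with inbound [ERROR, ERROR, RUNNING, WAITING]:
    # no chance to ever reach 3 triggering branches.
    if errors > total - required:
        return 'ERROR'
    return 'WAITING'


print(join_state(3, ['ERROR', 'ERROR', 'RUNNING', 'WAITING']))  # ERROR
print(join_state('all', ['RUNNING', 'RUNNING']))                # RUNNING
```

The real method additionally tracks a remaining cardinality and the triggering task executions, which the engine uses to decide how often to refresh a WAITING join.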
# See the License for the specific language governing permissions and # limitations under the License. from mistral_lib.actions import types # For backwards compatibility Result = types.Result ResultSerializer = types.ResultSerializer mistral-6.0.0/mistral/workflow/commands.py0000666000175100017510000001257213245513262020730 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from mistral.lang import parser as spec_parser from mistral.lang.v2 import tasks from mistral.workflow import states class WorkflowCommand(object): """Workflow command. A set of workflow commands form a communication protocol between workflow controller and its clients. When workflow controller makes a decision about how to continue a workflow it returns a set of commands so that a caller knows what to do next. 
""" def __init__(self, wf_ex, wf_spec, task_spec, ctx, triggered_by=None): self.wf_ex = wf_ex self.wf_spec = wf_spec self.task_spec = task_spec self.ctx = ctx or {} self.triggered_by = triggered_by class Noop(WorkflowCommand): """No-operation command.""" def __repr__(self): return "NOOP [workflow=%s]" % self.wf_ex.name class RunTask(WorkflowCommand): """Instruction to run a workflow task.""" def __init__(self, wf_ex, wf_spec, task_spec, ctx, triggered_by=None): super(RunTask, self).__init__( wf_ex, wf_spec, task_spec, ctx, triggered_by=triggered_by ) self.wait = False self.unique_key = None def is_waiting(self): return self.wait def get_unique_key(self): return self.unique_key def __repr__(self): return ( "Run task [workflow=%s, task=%s, waif_flag=%s, triggered_by=%s]" % ( self.wf_ex.name, self.task_spec.get_name(), self.wait, self.triggered_by ) ) class RunExistingTask(WorkflowCommand): """Command for running already existent task.""" def __init__(self, wf_ex, wf_spec, task_ex, reset=True, triggered_by=None): super(RunExistingTask, self).__init__( wf_ex, wf_spec, spec_parser.get_task_spec(task_ex.spec), task_ex.in_context, triggered_by=triggered_by ) self.task_ex = task_ex self.reset = reset self.unique_key = task_ex.unique_key class SetWorkflowState(WorkflowCommand): """Instruction to change a workflow state.""" def __init__(self, wf_ex, wf_spec, task_spec, ctx, new_state, msg=None, triggered_by=None): super(SetWorkflowState, self).__init__( wf_ex, wf_spec, task_spec, ctx, triggered_by=triggered_by ) self.new_state = new_state self.msg = msg class FailWorkflow(SetWorkflowState): """Instruction to fail a workflow.""" def __init__(self, wf_ex, wf_spec, task_spec, ctx, msg=None, triggered_by=None): super(FailWorkflow, self).__init__( wf_ex, wf_spec, task_spec, ctx, states.ERROR, msg=msg, triggered_by=triggered_by ) def __repr__(self): return "Fail [workflow=%s]" % self.wf_ex.name class SucceedWorkflow(SetWorkflowState): """Instruction to succeed a workflow.""" def 
__init__(self, wf_ex, wf_spec, task_spec, ctx, msg=None, triggered_by=None): super(SucceedWorkflow, self).__init__( wf_ex, wf_spec, task_spec, ctx, states.SUCCESS, msg=msg, triggered_by=triggered_by ) def __repr__(self): return "Succeed [workflow=%s]" % self.wf_ex.name class PauseWorkflow(SetWorkflowState): """Instruction to pause a workflow.""" def __init__(self, wf_ex, wf_spec, task_spec, ctx, msg=None, triggered_by=None): super(PauseWorkflow, self).__init__( wf_ex, wf_spec, task_spec, ctx, states.PAUSED, msg=msg, triggered_by=triggered_by ) def __repr__(self): return "Pause [workflow=%s]" % self.wf_ex.name RESERVED_CMDS = dict(zip( tasks.RESERVED_TASK_NAMES, [ Noop, FailWorkflow, SucceedWorkflow, PauseWorkflow ] )) def get_command_class(cmd_name): return RESERVED_CMDS[cmd_name] if cmd_name in RESERVED_CMDS else None def create_command(cmd_name, wf_ex, wf_spec, task_spec, ctx, params=None, triggered_by=None): cmd_cls = get_command_class(cmd_name) or RunTask if issubclass(cmd_cls, SetWorkflowState): return cmd_cls( wf_ex, wf_spec, task_spec, ctx, msg=params.get('msg'), triggered_by=triggered_by ) else: return cmd_cls( wf_ex, wf_spec, task_spec, ctx, triggered_by=triggered_by ) mistral-6.0.0/mistral/hacking/0000775000175100017510000000000013245513604016276 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/hacking/__init__.py0000666000175100017510000000000013245513261020376 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/hacking/checks.py0000666000175100017510000002325213245513261020115 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """Mistral's pep8 extensions. In order to make the review process faster and easier for core devs we are adding some Mistral specific pep8 checks. This will catch common errors. There are two types of pep8 extensions. One is a function that takes either a physical or logical line. The physical or logical line is the first param in the function definition and can be followed by other parameters supported by pep8. The second type is a class that parses AST trees. For more info please see pep8.py. """ import ast import re import six oslo_namespace_imports_dot = re.compile(r"import[\s]+oslo[.][^\s]+") oslo_namespace_imports_from_dot = re.compile(r"from[\s]+oslo[.]") oslo_namespace_imports_from_root = re.compile(r"from[\s]+oslo[\s]+import[\s]+") def no_assert_equal_true_false(logical_line): """Check for assertTrue/assertFalse sentences M319 """ _start_re = re.compile(r'assert(Not)?Equal\((True|False),') _end_re = re.compile(r'assert(Not)?Equal\(.*,\s+(True|False)\)$') if _start_re.search(logical_line) or _end_re.search(logical_line): yield (0, "M319: assertEqual(A, True|False), " "assertEqual(True|False, A), assertNotEqual(A, True|False), " "or assertNotEqual(True|False, A) sentences must not be used. " "Use assertTrue(A) or assertFalse(A) instead") def no_assert_true_false_is_not(logical_line): """Check for assertIs/assertIsNot sentences M320 """ _re = re.compile(r'assert(True|False)\(.+\s+is\s+(not\s+)?.+\)$') if _re.search(logical_line): yield (0, "M320: assertTrue(A is|is not B) or " "assertFalse(A is|is not B) sentences must not be used. 
" "Use assertIs(A, B) or assertIsNot(A, B) instead") def check_oslo_namespace_imports(logical_line): if re.match(oslo_namespace_imports_from_dot, logical_line): msg = ("O323: '%s' must be used instead of '%s'.") % ( logical_line.replace('oslo.', 'oslo_'), logical_line) yield(0, msg) elif re.match(oslo_namespace_imports_from_root, logical_line): msg = ("O323: '%s' must be used instead of '%s'.") % ( logical_line.replace('from oslo import ', 'import oslo_'), logical_line) yield(0, msg) elif re.match(oslo_namespace_imports_dot, logical_line): msg = ("O323: '%s' must be used instead of '%s'.") % ( logical_line.replace('import', 'from').replace('.', ' import '), logical_line) yield(0, msg) def check_python3_xrange(logical_line): if re.search(r"\bxrange\s*\(", logical_line): yield(0, "M327: Do not use xrange(). 'xrange()' is not compatible " "with Python 3. Use range() or six.moves.range() instead.") def check_python3_no_iteritems(logical_line): msg = ("M328: Use six.iteritems() instead of dict.iteritems().") if re.search(r".*\.iteritems\(\)", logical_line): yield(0, msg) def check_python3_no_iterkeys(logical_line): msg = ("M329: Use six.iterkeys() instead of dict.iterkeys().") if re.search(r".*\.iterkeys\(\)", logical_line): yield(0, msg) def check_python3_no_itervalues(logical_line): msg = ("M330: Use six.itervalues() instead of dict.itervalues().") if re.search(r".*\.itervalues\(\)", logical_line): yield(0, msg) class BaseASTChecker(ast.NodeVisitor): """Provides a simple framework for writing AST-based checks. Subclasses should implement visit_* methods like any other AST visitor implementation. When they detect an error for a particular node the method should call ``self.add_error(offending_node)``. Details about where in the code the error occurred will be pulled from the node object. Subclasses should also provide a class variable named CHECK_DESC to be used for the human readable error message. 
""" def __init__(self, tree, filename): """This object is created automatically by pep8. :param tree: an AST tree :param filename: name of the file being analyzed (ignored by our checks) """ self._tree = tree self._errors = [] def run(self): """Called automatically by pep8.""" self.visit(self._tree) return self._errors def add_error(self, node, message=None): """Add an error caused by a node to the list of errors for pep8.""" message = message or self.CHECK_DESC error = (node.lineno, node.col_offset, message, self.__class__) self._errors.append(error) class CheckForLoggingIssues(BaseASTChecker): CHECK_DESC = ('M001 Using the deprecated Logger.warn') LOG_MODULES = ('logging', 'oslo_log.log') def __init__(self, tree, filename): super(CheckForLoggingIssues, self).__init__(tree, filename) self.logger_names = [] self.logger_module_names = [] # NOTE(dstanek): This kinda accounts for scopes when talking # about only leaf node in the graph. self.assignments = {} def _filter_imports(self, module_name, alias): """Keeps lists of logging imports.""" if module_name in self.LOG_MODULES: self.logger_module_names.append(alias.asname or alias.name) def visit_Import(self, node): for alias in node.names: self._filter_imports(alias.name, alias) return super(CheckForLoggingIssues, self).generic_visit(node) def visit_ImportFrom(self, node): for alias in node.names: full_name = '%s.%s' % (node.module, alias.name) self._filter_imports(full_name, alias) return super(CheckForLoggingIssues, self).generic_visit(node) def _find_name(self, node): """Return the fully qualified name or a Name or a Attribute.""" if isinstance(node, ast.Name): return node.id elif (isinstance(node, ast.Attribute) and isinstance(node.value, (ast.Name, ast.Attribute))): obj_name = self._find_name(node.value) if obj_name is None: return None method_name = node.attr return obj_name + '.' 
+ method_name elif isinstance(node, six.string_types): return node else: # Could be Subscript, Call or many more return None def visit_Assign(self, node): """Look for 'LOG = logging.getLogger' This handles the simple case: name = [logging_module].getLogger(...) """ attr_node_types = (ast.Name, ast.Attribute) if (len(node.targets) != 1 or not isinstance(node.targets[0], attr_node_types)): # Say no to: "x, y = ..." return super(CheckForLoggingIssues, self).generic_visit(node) target_name = self._find_name(node.targets[0]) if (isinstance(node.value, ast.BinOp) and isinstance(node.value.op, ast.Mod)): if (isinstance(node.value.left, ast.Call) and isinstance(node.value.left.func, ast.Name)): # NOTE(dstanek): This is done to match cases like: # `msg = _('something %s') % x` node = ast.Assign(value=node.value.left) if not isinstance(node.value, ast.Call): # node.value must be a call to getLogger self.assignments.pop(target_name, None) return super(CheckForLoggingIssues, self).generic_visit(node) if (not isinstance(node.value.func, ast.Attribute) or not isinstance(node.value.func.value, attr_node_types)): # Function must be an attribute on an object like # logging.getLogger return super(CheckForLoggingIssues, self).generic_visit(node) object_name = self._find_name(node.value.func.value) func_name = node.value.func.attr if (object_name in self.logger_module_names and func_name == 'getLogger'): self.logger_names.append(target_name) return super(CheckForLoggingIssues, self).generic_visit(node) def visit_Call(self, node): """Look for the 'LOG.*' calls.""" # obj.method if isinstance(node.func, ast.Attribute): obj_name = self._find_name(node.func.value) if isinstance(node.func.value, ast.Name): method_name = node.func.attr elif isinstance(node.func.value, ast.Attribute): obj_name = self._find_name(node.func.value) method_name = node.func.attr else: # Could be Subscript, Call or many more return super(CheckForLoggingIssues, self).generic_visit(node) # If dealing with a logger the 
method can't be "warn". if obj_name in self.logger_names and method_name == 'warn': self.add_error(node.args[0]) return super(CheckForLoggingIssues, self).generic_visit(node) def factory(register): register(no_assert_equal_true_false) register(no_assert_true_false_is_not) register(check_oslo_namespace_imports) register(CheckForLoggingIssues) register(check_python3_no_iteritems) register(check_python3_no_iterkeys) register(check_python3_no_itervalues) register(check_python3_xrange) mistral-6.0.0/mistral/service/0000775000175100017510000000000013245513604016332 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/service/coordination.py0000666000175100017510000001361613245513261021404 0ustar zuulzuul00000000000000# Copyright 2015 Huawei Technologies Co., Ltd. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import six from oslo_concurrency import lockutils from oslo_config import cfg from oslo_log import log from oslo_service import threadgroup import tenacity import tooz.coordination from mistral import utils LOG = log.getLogger(__name__) _SERVICE_COORDINATOR = None class ServiceCoordinator(object): """Service coordinator. This class uses the `tooz` library to manage group membership. To ensure that the other agents know this agent is still alive, the `heartbeat` method should be called periodically. 
""" def __init__(self, my_id=None): self._coordinator = None self._my_id = my_id or utils.get_process_identifier() self._started = False def start(self): backend_url = cfg.CONF.coordination.backend_url if backend_url: try: self._coordinator = tooz.coordination.get_coordinator( backend_url, self._my_id ) self._coordinator.start() self._started = True LOG.info('Coordination backend started successfully.') except tooz.coordination.ToozError as e: self._started = False LOG.exception('Error connecting to coordination backend. ' '%s', six.text_type(e)) def stop(self): if not self.is_active(): return try: self._coordinator.stop() except tooz.coordination.ToozError: LOG.warning('Error connecting to coordination backend.') finally: self._coordinator = None self._started = False def is_active(self): return self._coordinator and self._started def heartbeat(self): if not self.is_active(): # Re-connect. self.start() if not self.is_active(): LOG.debug("Coordination backend didn't start.") return try: self._coordinator.heartbeat() except tooz.coordination.ToozError as e: LOG.exception('Error sending a heartbeat to coordination ' 'backend. %s', six.text_type(e)) self._started = False @tenacity.retry(stop=tenacity.stop_after_attempt(5)) def join_group(self, group_id): if not self.is_active() or not group_id: return try: join_req = self._coordinator.join_group(group_id) join_req.get() LOG.info( 'Joined service group:%s, member:%s', group_id, self._my_id ) return except tooz.coordination.MemberAlreadyExist: return except tooz.coordination.GroupNotCreated as e: create_grp_req = self._coordinator.create_group(group_id) try: create_grp_req.get() except tooz.coordination.GroupAlreadyExist: pass # Re-raise exception to join group again. raise e def leave_group(self, group_id): if self.is_active(): self._coordinator.leave_group(group_id) LOG.info( 'Left service group:%s, member:%s', group_id, self._my_id ) def get_members(self, group_id): """Gets members of coordination group. 
A ToozError exception must be handled when this function is invoked; we leave the handling decision to the invoker. """ if not self.is_active(): return [] get_members_req = self._coordinator.get_members(group_id) try: members = get_members_req.get() LOG.debug('Members of group %s: %s', group_id, members) return members except tooz.coordination.GroupNotCreated: LOG.warning('Group %s does not exist.', group_id) return [] def cleanup_service_coordinator(): """Intends to be used by tests to recreate service coordinator.""" global _SERVICE_COORDINATOR _SERVICE_COORDINATOR = None def get_service_coordinator(my_id=None): global _SERVICE_COORDINATOR if not _SERVICE_COORDINATOR: _SERVICE_COORDINATOR = ServiceCoordinator(my_id=my_id) _SERVICE_COORDINATOR.start() return _SERVICE_COORDINATOR class Service(object): def __init__(self, group_type): self.group_type = group_type self._tg = None @lockutils.synchronized('service_coordinator') def register_membership(self): """Registers group membership. Because this method will be invoked on each service startup at almost the same time, it must be synchronized, in case all the services are started within the same process. """ service_coordinator = get_service_coordinator() if service_coordinator.is_active(): service_coordinator.join_group(self.group_type) self._tg = threadgroup.ThreadGroup() self._tg.add_timer( cfg.CONF.coordination.heartbeat_interval, service_coordinator.heartbeat ) def stop(self): service_coordinator = get_service_coordinator() if service_coordinator.is_active(): self._tg.stop() service_coordinator.stop() mistral-6.0.0/mistral/service/base.py0000666000175100017510000000374513245513261017625 0ustar zuulzuul00000000000000# Copyright 2016 - Nokia Networks # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from eventlet import event from oslo_service import service from mistral.service import coordination class MistralService(service.Service): """Base class for Mistral services. The term 'service' here means any Mistral component that can run as an independent process and thus can be registered as a cluster member. """ def __init__(self, cluster_group, setup_profiler=True): super(MistralService, self).__init__() self.cluster_member = coordination.Service(cluster_group) self._setup_profiler = setup_profiler self._started = event.Event() def wait_started(self): """Wait until the service is fully started.""" self._started.wait() def _notify_started(self, message): print(message) self._started.send() def start(self): super(MistralService, self).start() self.cluster_member.register_membership() def stop(self, graceful=False): super(MistralService, self).stop(graceful) self._started = event.Event() # TODO(rakhmerov): Probably we could also take care of an RPC server # if it exists for this particular service type. Take a look at # executor and engine servers. # TODO(rakhmerov): This method is not implemented correctly now # (not thread-safe). Uncomment this call once it's fixed. # self.cluster_member.stop() mistral-6.0.0/mistral/service/__init__.py0000666000175100017510000000000013245513261020432 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/serialization.py0000666000175100017510000001045613245513261020130 0ustar zuulzuul00000000000000# Copyright 2017 Nokia Networks. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from mistral_lib import serialization as ml_serialization from oslo_serialization import jsonutils _SERIALIZER = None # Backwards compatibility Serializer = ml_serialization.Serializer DictBasedSerializer = ml_serialization.DictBasedSerializer MistralSerializable = ml_serialization.MistralSerializable # PolymorphicSerializer cannot be used from mistral-lib yet # as mistral-lib does not have the unregister method. # Once it does, this will also be removed in favor of mistral-lib class PolymorphicSerializer(Serializer): """Polymorphic serializer. The purpose of this class is to serve as a serialization router between serializers that can work with entities of a particular type. All concrete serializers associated with concrete entity classes should be registered via the 'register' method; after that, an instance of the polymorphic serializer can be used as a universal serializer for an RPC system or something else. When converting an object into a string this serializer also writes a special key into the result string sequence so that it's possible to find a proper serializer when deserializing this object. If a primitive value is given as an entity this serializer doesn't do anything special and simply converts the value into a string using jsonutils. It behaves similarly when converting a string into a primitive value. 
""" def __init__(self): # {serialization key: serializer} self.serializers = {} @staticmethod def _get_serialization_key(entity_cls): if issubclass(entity_cls, MistralSerializable): return entity_cls.get_serialization_key() return None def register(self, entity_cls, serializer): key = self._get_serialization_key(entity_cls) if not key: return if key in self.serializers: raise RuntimeError( "A serializer for the entity class has already been" " registered: %s" % entity_cls ) self.serializers[key] = serializer def unregister(self, entity_cls): key = self._get_serialization_key(entity_cls) if not key: return if key in self.serializers: del self.serializers[key] def cleanup(self): self.serializers.clear() def serialize(self, entity): if entity is None: return None key = self._get_serialization_key(type(entity)) # Primitive or not registered type. if not key: return jsonutils.dumps( jsonutils.to_primitive(entity, convert_instances=True) ) serializer = self.serializers.get(key) if not serializer: raise RuntimeError( "Failed to find a serializer for the key: %s" % key ) result = { '__serial_key': key, '__serial_data': serializer.serialize(entity) } return jsonutils.dumps(result) def deserialize(self, data_str): if data_str is None: return None data = jsonutils.loads(data_str) if isinstance(data, dict) and '__serial_key' in data: serializer = self.serializers.get(data['__serial_key']) return serializer.deserialize(data['__serial_data']) return data def get_polymorphic_serializer(): global _SERIALIZER if _SERIALIZER is None: _SERIALIZER = PolymorphicSerializer() return _SERIALIZER def register_serializer(entity_cls, serializer): get_polymorphic_serializer().register(entity_cls, serializer) def unregister_serializer(entity_cls): get_polymorphic_serializer().unregister(entity_cls) def cleanup(): get_polymorphic_serializer().cleanup() mistral-6.0.0/mistral/lang/0000775000175100017510000000000013245513604015613 5ustar 
zuulzuul00000000000000mistral-6.0.0/mistral/lang/v2/0000775000175100017510000000000013245513604016142 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/lang/v2/publish.py0000666000175100017510000000362413245513261020170 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from mistral import exceptions as exc from mistral.lang import types from mistral.lang.v2 import base class PublishSpec(base.BaseSpec): _schema = { "type": "object", "properties": { "branch": types.NONEMPTY_DICT, "global": types.NONEMPTY_DICT, "atomic": types.NONEMPTY_DICT }, "additionalProperties": False } def __init__(self, data): super(PublishSpec, self).__init__(data) self._branch = self._data.get('branch') self._global = self._data.get('global') self._atomic = self._data.get('atomic') @classmethod def get_schema(cls, includes=['definitions']): return super(PublishSpec, cls).get_schema(includes) def validate_semantics(self): if not self._branch and not self._global and not self._atomic: raise exc.InvalidModelException( "Either 'branch', 'global' or 'atomic' must be specified: %s" % self._data ) self.validate_expr(self._branch) self.validate_expr(self._global) self.validate_expr(self._atomic) def get_branch(self): return self._branch def get_global(self): return self._global def get_atomic(self): return self._atomic mistral-6.0.0/mistral/lang/v2/actions.py0000666000175100017510000000522413245513261020160 0ustar
zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import six from mistral.lang import types from mistral.lang.v2 import base from mistral import utils class ActionSpec(base.BaseSpec): # See http://json-schema.org _schema = { "type": "object", "properties": { "base": types.NONEMPTY_STRING, "base-input": types.NONEMPTY_DICT, "input": types.UNIQUE_STRING_OR_ONE_KEY_DICT_LIST, "output": types.ANY_NULLABLE, }, "required": ["base"], "additionalProperties": False } def __init__(self, data): super(ActionSpec, self).__init__(data) self._name = data['name'] self._description = data.get('description') self._tags = data.get('tags', []) self._base = data['base'] self._base_input = data.get('base-input', {}) self._input = utils.get_dict_from_entries(data.get('input', [])) self._output = data.get('output') self._base, _input = self._parse_cmd_and_input(self._base) utils.merge_dicts(self._base_input, _input) def validate_schema(self): super(ActionSpec, self).validate_schema() # Validate YAQL expressions. 
inline_params = self._parse_cmd_and_input(self._data.get('base'))[1] self.validate_expr(inline_params) self.validate_expr(self._data.get('base-input', {})) if isinstance(self._data.get('output'), six.string_types): self.validate_expr(self._data.get('output')) def get_name(self): return self._name def get_description(self): return self._description def get_tags(self): return self._tags def get_base(self): return self._base def get_base_input(self): return self._base_input def get_input(self): return self._input def get_output(self): return self._output class ActionSpecList(base.BaseSpecList): item_class = ActionSpec class ActionListSpec(base.BaseListSpec): item_class = ActionSpec def get_actions(self): return self.get_items() mistral-6.0.0/mistral/lang/v2/policies.py0000666000175100017510000000511313245513261020324 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from mistral.lang import types from mistral.lang.v2 import base from mistral.lang.v2 import retry_policy class PoliciesSpec(base.BaseSpec): # See http://json-schema.org _schema = { "type": "object", "properties": { "retry": types.ANY, "wait-before": types.EXPRESSION_OR_POSITIVE_INTEGER, "wait-after": types.EXPRESSION_OR_POSITIVE_INTEGER, "timeout": types.EXPRESSION_OR_POSITIVE_INTEGER, "pause-before": types.EXPRESSION_OR_BOOLEAN, "concurrency": types.EXPRESSION_OR_POSITIVE_INTEGER, }, "additionalProperties": False } @classmethod def get_schema(cls, includes=['definitions']): return super(PoliciesSpec, cls).get_schema(includes) def __init__(self, data): super(PoliciesSpec, self).__init__(data) self._retry = self._spec_property('retry', retry_policy.RetrySpec) self._wait_before = data.get('wait-before', 0) self._wait_after = data.get('wait-after', 0) self._timeout = data.get('timeout', 0) self._pause_before = data.get('pause-before', False) self._concurrency = data.get('concurrency', 0) def validate_schema(self): super(PoliciesSpec, self).validate_schema() # Validate YAQL expressions. self.validate_expr(self._data.get('wait-before', 0)) self.validate_expr(self._data.get('wait-after', 0)) self.validate_expr(self._data.get('timeout', 0)) self.validate_expr(self._data.get('pause-before', False)) self.validate_expr(self._data.get('concurrency', 0)) def get_retry(self): return self._retry def get_wait_before(self): return self._wait_before def get_wait_after(self): return self._wait_after def get_timeout(self): return self._timeout def get_pause_before(self): return self._pause_before def get_concurrency(self): return self._concurrency mistral-6.0.0/mistral/lang/v2/base.py0000666000175100017510000000220213245513261017423 0ustar zuulzuul00000000000000# Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from mistral.lang import base from mistral.lang import types class BaseSpec(base.BaseSpec): _version = "2.0" _meta_schema = { "type": "object", "properties": { "name": types.NONEMPTY_STRING, "version": types.VERSION, "description": types.NONEMPTY_STRING, "tags": types.UNIQUE_STRING_LIST }, "required": ["name", "version"] } class BaseSpecList(base.BaseSpecList): _version = "2.0" class BaseListSpec(base.BaseListSpec): _version = "2.0" mistral-6.0.0/mistral/lang/v2/retry_policy.py0000666000175100017510000000517313245513261021247 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import six from mistral.lang import types from mistral.lang.v2 import base class RetrySpec(base.BaseSpec): # See http://json-schema.org _retry_dict_schema = { "type": "object", "properties": { "count": { "oneOf": [ types.EXPRESSION, types.POSITIVE_INTEGER ] }, "break-on": types.EXPRESSION, "continue-on": types.EXPRESSION, "delay": { "oneOf": [ types.EXPRESSION, types.POSITIVE_INTEGER ] }, }, "required": ["delay", "count"], "additionalProperties": False } _schema = { "oneOf": [ _retry_dict_schema, types.NONEMPTY_STRING ] } @classmethod def get_schema(cls, includes=['definitions']): return super(RetrySpec, cls).get_schema(includes) def __init__(self, data): data = self._transform_retry_one_line(data) super(RetrySpec, self).__init__(data) self._break_on = data.get('break-on') self._count = data.get('count') self._continue_on = data.get('continue-on') self._delay = data['delay'] def _transform_retry_one_line(self, retry): if isinstance(retry, six.string_types): _, params = self._parse_cmd_and_input(retry) return params return retry def validate_schema(self): super(RetrySpec, self).validate_schema() # Validate YAQL expressions. self.validate_expr(self._data.get('count')) self.validate_expr(self._data.get('delay')) self.validate_expr(self._data.get('break-on')) self.validate_expr(self._data.get('continue-on')) def get_count(self): return self._count def get_break_on(self): return self._break_on def get_continue_on(self): return self._continue_on def get_delay(self): return self._delay mistral-6.0.0/mistral/lang/v2/tasks.py0000666000175100017510000002542013245513261017645 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
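RetrySpec accepts a one-line string form such as `count=3 delay=5` and normalizes it to a dictionary in `_transform_retry_one_line()`, which delegates to `BaseSpec._parse_cmd_and_input()`. A deliberately simplified re-implementation of that normalization, handling only integer and quoted values (the real parser also accepts YAQL/Jinja expressions, lists, and other JSON literals):

```python
import re

# Simplified, hypothetical counterpart of the one-line retry parsing;
# not the actual Mistral implementation.
PARAM = re.compile(r"([-\w]+)=(\d+|'[^']*'|\"[^\"]*\")")


def parse_one_line_retry(retry):
    """Normalize 'count=3 delay=5' into {'count': 3, 'delay': 5}."""
    if not isinstance(retry, str):
        return retry  # Already a dict - pass it through unchanged.

    params = {}
    for key, raw in PARAM.findall(retry):
        # Integers become ints; quoted values lose their quotes.
        params[key] = int(raw) if raw.isdigit() else raw.strip("'\"")

    return params
```

After normalization both input forms hit the same dict-based `_retry_dict_schema`, which is why `"required": ["delay", "count"]` applies to the one-line syntax too.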
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy import json import re import six from mistral import exceptions as exc from mistral import expressions from mistral.lang import types from mistral.lang.v2 import base from mistral.lang.v2 import on_clause from mistral.lang.v2 import policies from mistral.lang.v2 import publish from mistral import utils from mistral.workflow import states _expr_ptrns = [expressions.patterns[name] for name in expressions.patterns] WITH_ITEMS_PTRN = re.compile( "\s*([\w\d_\-]+)\s*in\s*(\[.+\]|%s)" % '|'.join(_expr_ptrns) ) RESERVED_TASK_NAMES = [ 'noop', 'fail', 'succeed', 'pause' ] class TaskSpec(base.BaseSpec): # See http://json-schema.org _polymorphic_key = ('type', 'direct') _schema = { "type": "object", "properties": { "type": types.WORKFLOW_TYPE, "action": types.NONEMPTY_STRING, "workflow": types.NONEMPTY_STRING, "input": { "oneOf": [ types.NONEMPTY_DICT, types.NONEMPTY_STRING ] }, "with-items": { "oneOf": [ types.NONEMPTY_STRING, types.UNIQUE_STRING_LIST ] }, "publish": types.NONEMPTY_DICT, "publish-on-error": types.NONEMPTY_DICT, "retry": types.ANY, "wait-before": types.ANY, "wait-after": types.ANY, "timeout": types.ANY, "pause-before": types.ANY, "concurrency": types.ANY, "target": types.NONEMPTY_STRING, "keep-result": types.EXPRESSION_OR_BOOLEAN, "safe-rerun": types.EXPRESSION_OR_BOOLEAN }, "additionalProperties": False, "anyOf": [ { "not": { "type": "object", "required": ["action", "workflow"] }, }, { "oneOf": [ { "type": "object", "required": ["action"] }, { "type": "object", "required": ["workflow"] } ] } ] } def __init__(self, data): super(TaskSpec, 
self).__init__(data) self._name = data['name'] self._description = data.get('description') self._action = data.get('action') self._workflow = data.get('workflow') self._input = data.get('input', {}) self._with_items = self._transform_with_items() self._publish = data.get('publish', {}) self._publish_on_error = data.get('publish-on-error', {}) self._policies = self._group_spec( policies.PoliciesSpec, 'retry', 'wait-before', 'wait-after', 'timeout', 'pause-before', 'concurrency' ) self._target = data.get('target') self._keep_result = data.get('keep-result', True) self._safe_rerun = data.get('safe-rerun', False) self._process_action_and_workflow() def validate_schema(self): super(TaskSpec, self).validate_schema() action = self._data.get('action') workflow = self._data.get('workflow') self._validate_name() # Validate YAQL expressions. if action or workflow: inline_params = self._parse_cmd_and_input(action or workflow)[1] self.validate_expr(inline_params) self.validate_expr(self._data.get('input', {})) self.validate_expr(self._data.get('publish', {})) self.validate_expr(self._data.get('publish-on-error', {})) self.validate_expr(self._data.get('keep-result', {})) self.validate_expr(self._data.get('safe-rerun', {})) def _validate_name(self): task_name = self._data.get('name') if task_name in RESERVED_TASK_NAMES: raise exc.InvalidModelException( "Reserved keyword '%s' not allowed as task name." % task_name ) def _transform_with_items(self): raw = self._data.get('with-items', []) with_items = {} if isinstance(raw, six.string_types): raw = [raw] for item in raw: if not isinstance(item, six.string_types): raise exc.InvalidModelException( "'with-items' elements should be strings: %s" % self._data ) match = re.match(WITH_ITEMS_PTRN, item) if not match: msg = ("Wrong format of 'with-items' property. 
Please use " "format 'var in {[some, list] | <%% $.array %%> }: " "%s" % self._data) raise exc.InvalidModelException(msg) match_groups = match.groups() var_name = match_groups[0] array = match_groups[1] # Validate YAQL expression that may follow after "in" for the # with-items syntax "var in {[some, list] | <% $.array %> }". self.validate_expr(array) if array.startswith('['): try: array = json.loads(array) except Exception as e: msg = ("Invalid array in 'with-items' clause: " "%s, error: %s" % (array, str(e))) raise exc.InvalidModelException(msg) with_items[var_name] = array return with_items def _process_action_and_workflow(self): params = {} if self._action: self._action, params = self._parse_cmd_and_input(self._action) elif self._workflow: self._workflow, params = self._parse_cmd_and_input( self._workflow) else: self._action = 'std.noop' utils.merge_dicts(self._input, params) def get_name(self): return self._name def get_description(self): return self._description def get_action_name(self): return self._action if self._action else None def get_workflow_name(self): return self._workflow def get_input(self): return self._input def get_with_items(self): return self._with_items def get_policies(self): return self._policies def get_target(self): return self._target def get_publish(self, state): spec = None if state == states.SUCCESS and self._publish: spec = publish.PublishSpec({'branch': self._publish}) elif state == states.ERROR and self._publish_on_error: spec = publish.PublishSpec( {'branch': self._publish_on_error} ) return spec def get_keep_result(self): return self._keep_result def get_safe_rerun(self): return self._safe_rerun def get_type(self): return (utils.WORKFLOW_TASK_TYPE if self._workflow else utils.ACTION_TASK_TYPE) class DirectWorkflowTaskSpec(TaskSpec): _polymorphic_value = 'direct' _direct_workflow_schema = { "type": "object", "properties": { "type": {"enum": [_polymorphic_value]}, "join": { "oneOf": [ {"enum": ["all", "one"]}, 
types.POSITIVE_INTEGER ] }, "on-complete": types.ANY, "on-success": types.ANY, "on-error": types.ANY } } _schema = utils.merge_dicts( copy.deepcopy(TaskSpec._schema), _direct_workflow_schema ) def __init__(self, data): super(DirectWorkflowTaskSpec, self).__init__(data) self._join = data.get('join') on_spec_cls = on_clause.OnClauseSpec self._on_complete = self._spec_property('on-complete', on_spec_cls) self._on_success = self._spec_property('on-success', on_spec_cls) self._on_error = self._spec_property('on-error', on_spec_cls) def validate_semantics(self): # Validate YAQL expressions. self._validate_transitions(self._on_complete) self._validate_transitions(self._on_success) self._validate_transitions(self._on_error) def _validate_transitions(self, on_clause_spec): val = on_clause_spec.get_next() if on_clause_spec else [] if not val: return [self.validate_expr(t) for t in ([val] if isinstance(val, six.string_types) else val)] def get_publish(self, state): spec = super(DirectWorkflowTaskSpec, self).get_publish(state) # TODO(rakhmerov): How do we need to resolve a possible conflict # between 'on-complete' and 'on-success/on-error' and # 'publish/publish-on-error'? For now we assume that 'on-error' # and 'on-success' take precedence over on-complete. 
on_clause = self._on_complete if state == states.SUCCESS: on_clause = self._on_success elif state == states.ERROR: on_clause = self._on_error if not on_clause: return spec return on_clause.get_publish() or spec def get_join(self): return self._join def get_on_complete(self): return self._on_complete def get_on_success(self): return self._on_success def get_on_error(self): return self._on_error class ReverseWorkflowTaskSpec(TaskSpec): _polymorphic_value = 'reverse' _reverse_workflow_schema = { "type": "object", "properties": { "type": {"enum": [_polymorphic_value]}, "requires": { "oneOf": [types.NONEMPTY_STRING, types.UNIQUE_STRING_LIST] } } } _schema = utils.merge_dicts( copy.deepcopy(TaskSpec._schema), _reverse_workflow_schema ) def __init__(self, data): super(ReverseWorkflowTaskSpec, self).__init__(data) self._requires = data.get('requires', []) def get_requires(self): if isinstance(self._requires, six.string_types): return [self._requires] return self._requires class TaskSpecList(base.BaseSpecList): item_class = TaskSpec mistral-6.0.0/mistral/lang/v2/__init__.py0000666000175100017510000000000013245513261020242 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/lang/v2/workbook.py0000666000175100017510000000470713245513261020362 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
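`TaskSpec._transform_with_items()` above turns each `var in [list]` string into a variable-to-collection mapping. A simplified version that handles only literal JSON arrays (the real `WITH_ITEMS_PTRN` also accepts YAQL/Jinja expressions after `in`):

```python
import json
import re

# Hypothetical, simplified counterpart of WITH_ITEMS_PTRN from
# tasks.py: only "<var> in <json-array>" is supported here.
SIMPLE_WITH_ITEMS = re.compile(r"\s*([\w\d_-]+)\s+in\s+(\[.+\])")


def transform_with_items(raw):
    """Normalize 'with-items' strings into {var_name: list} mappings."""
    if isinstance(raw, str):
        raw = [raw]

    with_items = {}
    for item in raw:
        match = SIMPLE_WITH_ITEMS.match(item)
        if not match:
            raise ValueError("Wrong format of 'with-items': %s" % item)

        var_name, array = match.groups()
        # Literal arrays are parsed eagerly, as in the real code.
        with_items[var_name] = json.loads(array)

    return with_items
```

In the real spec, expression-valued collections are left as strings here and only evaluated later, at run time, against the workflow context.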
from mistral.lang import types from mistral.lang.v2 import actions as act from mistral.lang.v2 import base from mistral.lang.v2 import workflows as wf # We want to match any single word that isn't exactly "version" NON_VERSION_WORD_REGEX = "^(?!version$)[\w-]+$" class WorkbookSpec(base.BaseSpec): # See http://json-schema.org _schema = { "type": "object", "properties": { "version": {"enum": ["2.0", 2.0]}, "actions": { "type": "object", "minProperties": 1, "patternProperties": { "^version$": {"enum": ["2.0", 2.0]}, NON_VERSION_WORD_REGEX: types.ANY }, "additionalProperties": False }, "workflows": { "type": "object", "minProperties": 1, "patternProperties": { "^version$": {"enum": ["2.0", 2.0]}, NON_VERSION_WORD_REGEX: types.ANY }, "additionalProperties": False } }, "additionalProperties": False } def __init__(self, data): super(WorkbookSpec, self).__init__(data) self._inject_version(['actions', 'workflows']) self._name = data['name'] self._description = data.get('description') self._tags = data.get('tags', []) self._actions = self._spec_property('actions', act.ActionSpecList) self._workflows = self._spec_property('workflows', wf.WorkflowSpecList) def get_name(self): return self._name def get_description(self): return self._description def get_tags(self): return self._tags def get_actions(self): return self._actions def get_workflows(self): return self._workflows mistral-6.0.0/mistral/lang/v2/workflows.py0000666000175100017510000002670513245513261020564 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
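The `NON_VERSION_WORD_REGEX` in the workbook schema above uses a negative lookahead so that the reserved `version` key is handled by its own pattern while every other key is treated as an action or workflow name. Checking that behavior in isolation:

```python
import re

# Same pattern as workbook.py: any single word that isn't exactly
# "version".
NON_VERSION_WORD_REGEX = r"^(?!version$)[\w-]+$"


def is_item_name(key):
    """True if 'key' may name an action/workflow inside a workbook."""
    return re.match(NON_VERSION_WORD_REGEX, key) is not None
```

Because the lookahead is anchored with `$`, only the exact word `version` is excluded; names that merely start with it remain valid.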
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_utils import uuidutils import six import threading from mistral import exceptions as exc from mistral.lang import types from mistral.lang.v2 import base from mistral.lang.v2 import task_defaults from mistral.lang.v2 import tasks from mistral import utils class WorkflowSpec(base.BaseSpec): # See http://json-schema.org _polymorphic_key = ('type', 'direct') _meta_schema = { "type": "object", "properties": { "type": types.WORKFLOW_TYPE, "task-defaults": types.NONEMPTY_DICT, "input": types.UNIQUE_STRING_OR_ONE_KEY_DICT_LIST, "output": types.NONEMPTY_DICT, "output-on-error": types.NONEMPTY_DICT, "vars": types.NONEMPTY_DICT, "tags": types.UNIQUE_STRING_LIST }, "required": ["tasks"], "additionalProperties": False } def __init__(self, data): super(WorkflowSpec, self).__init__(data) self._name = data['name'] self._description = data.get('description') self._tags = data.get('tags', []) self._type = data['type'] if 'type' in data else 'direct' self._input = utils.get_dict_from_entries(data.get('input', [])) self._output = data.get('output', {}) self._output_on_error = data.get('output-on-error', {}) self._vars = data.get('vars', {}) self._task_defaults = self._spec_property( 'task-defaults', task_defaults.TaskDefaultsSpec ) # Inject 'type' here, so instantiate_spec function can recognize the # specific subclass of TaskSpec. 
for task in six.itervalues(self._data.get('tasks')): task['type'] = self._type self._tasks = self._spec_property('tasks', tasks.TaskSpecList) def validate_schema(self): super(WorkflowSpec, self).validate_schema() if not self._data.get('tasks'): raise exc.InvalidModelException( "Workflow doesn't have any tasks [data=%s]" % self._data ) # Validate expressions. self.validate_expr(self._data.get('output', {})) self.validate_expr(self._data.get('vars', {})) def validate_semantics(self): super(WorkflowSpec, self).validate_semantics() # Distinguish workflow name from workflow UUID. if uuidutils.is_uuid_like(self._name): raise exc.InvalidModelException( "Workflow name cannot be in the format of UUID." ) def _validate_task_link(self, task_name, allow_engine_cmds=True): valid_task = self._task_exists(task_name) if allow_engine_cmds: valid_task |= task_name in tasks.RESERVED_TASK_NAMES if not valid_task: raise exc.InvalidModelException( "Task '%s' not found." % task_name ) def _task_exists(self, task_name): return self.get_tasks()[task_name] is not None def get_name(self): return self._name def get_description(self): return self._description def get_tags(self): return self._tags def get_type(self): return self._type def get_input(self): return self._input def get_output(self): return self._output def get_output_on_error(self): return self._output_on_error def get_vars(self): return self._vars def get_task_defaults(self): return self._task_defaults def get_tasks(self): return self._tasks def get_task(self, name): return self._tasks[name] class DirectWorkflowSpec(WorkflowSpec): _polymorphic_value = 'direct' _schema = { "properties": { "tasks": { "type": "object", "minProperties": 1, "patternProperties": { "^\w+$": types.NONEMPTY_DICT } }, } } def __init__(self, data): super(DirectWorkflowSpec, self).__init__(data) # Init simple dictionary based caches for inbound and # outbound task specifications. 
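`WorkflowSpec.validate_semantics()` above rejects UUID-like workflow names via `oslo_utils.uuidutils.is_uuid_like()`, so that a workflow can always be addressed unambiguously by either name or id. A rough stdlib-only stand-in (an approximation, not the oslo implementation) shows why both hyphenated and bare-hex forms are caught:

```python
import uuid

# Approximate stand-in for oslo_utils.uuidutils.is_uuid_like().
def is_uuid_like(value):
    try:
        return (str(uuid.UUID(value)).replace('-', '')
                == str(value).replace('-', '').lower())
    except (TypeError, ValueError, AttributeError):
        return False
```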
In fact, we don't need # any special cache implementations here because these # structures can't grow indefinitely. self.inbound_tasks_cache_lock = threading.RLock() self.inbound_tasks_cache = {} self.outbound_tasks_cache_lock = threading.RLock() self.outbound_tasks_cache = {} def validate_semantics(self): super(DirectWorkflowSpec, self).validate_semantics() # Check if there are start tasks. if not self.find_start_tasks(): raise exc.DSLParsingException( 'Failed to find start tasks in direct workflow. ' 'There must be at least one task without inbound transition.' '[workflow_name=%s]' % self._name ) self._check_workflow_integrity() self._check_join_tasks() def _check_workflow_integrity(self): for t_s in self.get_tasks(): out_task_names = self.find_outbound_task_names(t_s.get_name()) for out_t_name in out_task_names: self._validate_task_link(out_t_name) def _check_join_tasks(self): join_tasks = [t for t in self.get_tasks() if t.get_join()] err_msgs = [] for join_t in join_tasks: t_name = join_t.get_name() join_val = join_t.get_join() in_tasks = self.find_inbound_task_specs(join_t) if join_val == 'all': if len(in_tasks) == 0: err_msgs.append( "No inbound tasks for task with 'join: all'" " [task_name=%s]" % t_name ) continue if join_val == 'one': join_val = 1 if len(in_tasks) < join_val: err_msgs.append( "Not enough inbound tasks for task with 'join'" " [task_name=%s, join=%s, inbound_tasks=%s]" % (t_name, join_val, len(in_tasks)) ) if len(err_msgs) > 0: raise exc.InvalidModelException('\n'.join(err_msgs)) def find_start_tasks(self): return [ t_s for t_s in self.get_tasks() if not self.has_inbound_transitions(t_s) ] def find_inbound_task_specs(self, task_spec): task_name = task_spec.get_name() with self.inbound_tasks_cache_lock: specs = self.inbound_tasks_cache.get(task_name) if specs is not None: return specs specs = [ t_s for t_s in self.get_tasks() if self.transition_exists(t_s.get_name(), task_name) ] with self.inbound_tasks_cache_lock: 
self.inbound_tasks_cache[task_name] = specs return specs def find_outbound_task_specs(self, task_spec): task_name = task_spec.get_name() with self.outbound_tasks_cache_lock: specs = self.outbound_tasks_cache.get(task_name) if specs is not None: return specs specs = [ t_s for t_s in self.get_tasks() if self.transition_exists(task_name, t_s.get_name()) ] with self.outbound_tasks_cache_lock: self.outbound_tasks_cache[task_name] = specs return specs def has_inbound_transitions(self, task_spec): return len(self.find_inbound_task_specs(task_spec)) > 0 def has_outbound_transitions(self, task_spec): return len(self.find_outbound_task_specs(task_spec)) > 0 def find_outbound_task_names(self, task_name): t_names = set() for tup in self.get_on_error_clause(task_name): t_names.add(tup[0]) for tup in self.get_on_success_clause(task_name): t_names.add(tup[0]) for tup in self.get_on_complete_clause(task_name): t_names.add(tup[0]) return t_names def transition_exists(self, from_task_name, to_task_name): t_names = self.find_outbound_task_names(from_task_name) return to_task_name in t_names def get_on_error_clause(self, t_name): result = [] on_clause = self.get_tasks()[t_name].get_on_error() if on_clause: result = on_clause.get_next() if not result: t_defaults = self.get_task_defaults() if t_defaults and t_defaults.get_on_error(): result = self._remove_task_from_clause( t_defaults.get_on_error().get_next(), t_name ) return result def get_on_success_clause(self, t_name): result = [] on_clause = self.get_tasks()[t_name].get_on_success() if on_clause: result = on_clause.get_next() if not result: t_defaults = self.get_task_defaults() if t_defaults and t_defaults.get_on_success(): result = self._remove_task_from_clause( t_defaults.get_on_success().get_next(), t_name ) return result def get_on_complete_clause(self, t_name): result = [] on_clause = self.get_tasks()[t_name].get_on_complete() if on_clause: result = on_clause.get_next() if not result: t_defaults = self.get_task_defaults() if 
t_defaults and t_defaults.get_on_complete(): result = self._remove_task_from_clause( t_defaults.get_on_complete().get_next(), t_name ) return result @staticmethod def _remove_task_from_clause(on_clause, t_name): return list([tup for tup in on_clause if tup[0] != t_name]) class ReverseWorkflowSpec(WorkflowSpec): _polymorphic_value = 'reverse' _schema = { "properties": { "tasks": { "type": "object", "minProperties": 1, "patternProperties": { "^\w+$": types.NONEMPTY_DICT } }, } } def validate_semantics(self): super(ReverseWorkflowSpec, self).validate_semantics() self._check_workflow_integrity() def _check_workflow_integrity(self): for t_s in self.get_tasks(): for req in self.get_task_requires(t_s): self._validate_task_link(req, allow_engine_cmds=False) def get_task_requires(self, task_spec): requires = set(task_spec.get_requires()) defaults = self.get_task_defaults() if defaults: requires |= set(defaults.get_requires()) requires.discard(task_spec.get_name()) return list(requires) class WorkflowSpecList(base.BaseSpecList): item_class = WorkflowSpec class WorkflowListSpec(base.BaseListSpec): item_class = WorkflowSpec def get_workflows(self): return self.get_items() mistral-6.0.0/mistral/lang/v2/task_defaults.py0000666000175100017510000000652013245513261021351 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
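`ReverseWorkflowSpec.get_task_requires()` above merges a task's own `requires` with the task-defaults `requires` and discards the task itself so a task never appears to depend on itself. As a plain-function sketch (names illustrative):

```python
# Hypothetical sketch of the 'requires' resolution for reverse
# workflows: task-level and task-defaults lists are unioned, the task's
# own name is dropped.

def task_requires(task_name, task_requires_list, default_requires_list=()):
    requires = set(task_requires_list) | set(default_requires_list)
    requires.discard(task_name)
    return sorted(requires)
```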
import six from mistral.lang import types from mistral.lang.v2 import base from mistral.lang.v2 import on_clause from mistral.lang.v2 import policies # TODO(rakhmerov): This specification should be broken into two separate # specs for direct and reverse workflows. It's weird to combine them into # one because they address different use cases. class TaskDefaultsSpec(base.BaseSpec): # See http://json-schema.org _schema = { "type": "object", "properties": { "retry": types.ANY, "wait-before": types.ANY, "wait-after": types.ANY, "timeout": types.ANY, "pause-before": types.ANY, "concurrency": types.ANY, "on-complete": types.ANY, "on-success": types.ANY, "on-error": types.ANY, "requires": { "oneOf": [types.NONEMPTY_STRING, types.UNIQUE_STRING_LIST] } }, "additionalProperties": False } @classmethod def get_schema(cls, includes=['definitions']): return super(TaskDefaultsSpec, cls).get_schema(includes) def __init__(self, data): super(TaskDefaultsSpec, self).__init__(data) self._policies = self._group_spec( policies.PoliciesSpec, 'retry', 'wait-before', 'wait-after', 'timeout', 'pause-before', 'concurrency' ) on_spec_cls = on_clause.OnClauseSpec self._on_complete = self._spec_property('on-complete', on_spec_cls) self._on_success = self._spec_property('on-success', on_spec_cls) self._on_error = self._spec_property('on-error', on_spec_cls) # TODO(rakhmerov): 'requires' should reside in a different spec for # reverse workflows. self._requires = data.get('requires', []) def validate_semantics(self): # Validate YAQL expressions. 
        self._validate_transitions(self._on_complete)
        self._validate_transitions(self._on_success)
        self._validate_transitions(self._on_error)

    def _validate_transitions(self, on_clause_spec):
        val = on_clause_spec.get_next() if on_clause_spec else []

        if not val:
            return

        [self.validate_expr(t)
         for t in ([val] if isinstance(val, six.string_types) else val)]

    def get_policies(self):
        return self._policies

    def get_on_complete(self):
        return self._on_complete

    def get_on_success(self):
        return self._on_success

    def get_on_error(self):
        return self._on_error

    def get_requires(self):
        if isinstance(self._requires, six.string_types):
            return [self._requires]

        return self._requires
mistral-6.0.0/mistral/lang/v2/on_clause.py0000666000175100017510000000464013245513261020471 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc.
# Copyright 2015 - StackStorm, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import six

from mistral.lang import types
from mistral.lang.v2 import base
from mistral.lang.v2 import publish


class OnClauseSpec(base.BaseSpec):
    _simple_schema = {
        "oneOf": [
            types.NONEMPTY_STRING,
            types.UNIQUE_STRING_OR_EXPRESSION_CONDITION_LIST
        ]
    }

    _advanced_schema = {
        "type": "object",
        "properties": {
            "publish": types.NONEMPTY_DICT,
            "next": _simple_schema,
        },
        "additionalProperties": False
    }

    _schema = {"oneOf": [_simple_schema, _advanced_schema]}

    def __init__(self, data):
        super(OnClauseSpec, self).__init__(data)

        if not isinstance(data, dict):
            # Old simple schema.
            self._publish = None
            self._next = prepare_next_clause(data)
        else:
            # New advanced schema.
            self._publish = self._spec_property('publish', publish.PublishSpec)
            self._next = prepare_next_clause(data.get('next'))

    @classmethod
    def get_schema(cls, includes=['definitions']):
        return super(OnClauseSpec, cls).get_schema(includes)

    def get_publish(self):
        return self._publish

    def get_next(self):
        return self._next


def _as_list_of_tuples(data):
    if not data:
        return []

    if isinstance(data, six.string_types):
        return [_as_tuple(data)]

    return [_as_tuple(item) for item in data]


def _as_tuple(val):
    return list(val.items())[0] if isinstance(val, dict) else (val, '')


def prepare_next_clause(next_clause):
    list_of_tuples = _as_list_of_tuples(next_clause)

    for i, task in enumerate(list_of_tuples):
        task_name, params = OnClauseSpec._parse_cmd_and_input(task[0])

        list_of_tuples[i] = (task_name, task[1], params)

    return list_of_tuples
mistral-6.0.0/mistral/lang/base.py0000666000175100017510000002766113245513261017104 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc.
# Copyright 2015 - StackStorm, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
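The helper trio at the end of on_clause.py normalizes a `next` clause - a bare task name, a list of names, or a list of `{name: condition}` dicts - into uniform tuples. A simplified version without the inline-parameter parsing step (`prepare_next_clause` additionally appends a params dict to each tuple):

```python
# Simplified sketch of _as_list_of_tuples()/_as_tuple() from
# on_clause.py; conditions default to the empty string.

def as_list_of_tuples(data):
    if not data:
        return []

    if isinstance(data, str):
        data = [data]

    return [list(d.items())[0] if isinstance(d, dict) else (d, '')
            for d in data]
```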
import copy import json import jsonschema import re import six from mistral import exceptions as exc from mistral import expressions as expr from mistral.expressions.jinja_expression import ANY_JINJA_REGEXP from mistral.expressions.yaql_expression import INLINE_YAQL_REGEXP from mistral.lang import types from mistral import utils ACTION_PATTERNS = { "command": "[\w\.]+[^=\(\s\"]*", "yaql_expression": INLINE_YAQL_REGEXP, "jinja_expression": ANY_JINJA_REGEXP, } CMD_PTRN = re.compile( "^({})".format("|".join(six.itervalues(ACTION_PATTERNS))) ) EXPRESSION = '|'.join([expr.patterns[name] for name in expr.patterns]) _ALL_IN_BRACKETS = "\[.*\]\s*" _ALL_IN_QUOTES = "\"[^\"]*\"\s*" _ALL_IN_APOSTROPHES = "'[^']*'\s*" _DIGITS = "\d+" _TRUE = "true" _FALSE = "false" _NULL = "null" ALL = ( _ALL_IN_QUOTES, _ALL_IN_APOSTROPHES, EXPRESSION, _ALL_IN_BRACKETS, _TRUE, _FALSE, _NULL, _DIGITS ) PARAMS_PTRN = re.compile("([-_\w]+)=(%s)" % "|".join(ALL)) def instantiate_spec(spec_cls, data): """Instantiates specification accounting for specification hierarchies. :param spec_cls: Specification concrete or base class. In case if base class or the hierarchy is provided this method relies on attributes _polymorphic_key and _polymorphic_value in order to find a concrete class that needs to be instantiated. :param data: Raw specification data as a dictionary. """ if issubclass(spec_cls, BaseSpecList): # Ignore polymorphic search for specification lists because # it doesn't make sense for them. return spec_cls(data) if not hasattr(spec_cls, '_polymorphic_key'): spec = spec_cls(data) spec.validate_semantics() return spec # In order to do polymorphic search we need to make sure that # a spec is backed by a dictionary. Otherwise we can't extract # a polymorphic key. 
if not isinstance(data, dict): raise exc.InvalidModelException( "A specification with polymorphic key must be backed by" " a dictionary [spec_cls=%s, data=%s]" % (spec_cls, data) ) key = spec_cls._polymorphic_key if not isinstance(key, tuple): key_name = key key_default = None else: key_name = key[0] key_default = key[1] for cls in utils.iter_subclasses(spec_cls): if not hasattr(cls, '_polymorphic_value'): raise exc.DSLParsingException( "Class '%s' is expected to have attribute '_polymorphic_value'" " because it's a part of specification hierarchy inherited " "from class '%s'." % (cls, spec_cls) ) if cls._polymorphic_value == data.get(key_name, key_default): spec = cls(data) spec.validate_semantics() return spec raise exc.DSLParsingException( 'Failed to find a specification class to instantiate ' '[spec_cls=%s, data=%s]' % (spec_cls, data) ) class BaseSpec(object): """Base class for all DSL specifications. It represents a DSL entity such as workflow or task as a python object providing more convenient API to analyse DSL than just working with raw data in form of a dictionary. Specification classes also implement all required validation logic by overriding instance methods 'validate_schema()' and 'validate_semantics()'. Note that the specification mechanism allows to have polymorphic entities in DSL. For example, if we find it more convenient to have separate specification classes for different types of workflow (i.e. 'direct' and 'reverse') we can do so. In this case, in order to instantiate them correctly method 'instantiate_spec' must always be used where argument 'spec_cls' must be a root class of the specification hierarchy containing class attribute '_polymorhpic_key' pointing to a key in raw data relying on which we can find a concrete class. Concrete classes then must all have attribute '_polymorhpic_value' corresponding to a value in a raw data. 
Attribute '_polymorhpic_key' can be either a string or a tuple of size two where the first value is a key name itself and the second value is a default polymorphic value that must be used if raw data doesn't contain a configured key at all. An example of this situation is when we don't specify a workflow type in DSL. In this case, we assume it's 'direct'. """ # See http://json-schema.org _schema = { 'type': 'object' } _meta_schema = { 'type': 'object' } _definitions = {} _version = '2.0' @classmethod def get_schema(cls, includes=['meta', 'definitions']): schema = copy.deepcopy(cls._schema) schema['properties'] = utils.merge_dicts( schema.get('properties', {}), cls._meta_schema.get('properties', {}), overwrite=False ) if includes and 'meta' in includes: schema['required'] = list( set(schema.get('required', []) + cls._meta_schema.get('required', [])) ) if includes and 'definitions' in includes: schema['definitions'] = utils.merge_dicts( schema.get('definitions', {}), cls._definitions, overwrite=False ) return schema def __init__(self, data): self._data = data self.validate_schema() def validate_schema(self): """Validates DSL entity schema that this specification represents. By default, this method just validate schema of DSL entity that this specification represents using "_schema" class attribute. Additionally, child classes may implement additional logic to validate more specific things like YAQL expressions in their fields. Note that this method is called before construction of specification fields and validation logic should only rely on raw data provided as a dictionary accessible through '_data' instance field. """ try: jsonschema.validate(self._data, self.get_schema()) except jsonschema.ValidationError as e: raise exc.InvalidModelException("Invalid DSL: %s" % e) def validate_semantics(self): """Validates semantics of specification object. Child classes may implement validation logic to check things like integrity of corresponding data structure (e.g. 
task graph) or other things that can't be expressed in JSON schema. This method is called after specification has been built (i.e. its initializer has finished it's work) so that validation logic can rely on initialized specification fields. """ pass def validate_expr(self, dsl_part): if isinstance(dsl_part, six.string_types): expr.validate(dsl_part) elif isinstance(dsl_part, (list, tuple)): for expression in dsl_part: if isinstance(expression, six.string_types): expr.validate(expression) elif isinstance(dsl_part, dict): for expression in dsl_part.values(): if isinstance(expression, six.string_types): expr.validate(expression) def _spec_property(self, prop_name, spec_cls): prop_val = self._data.get(prop_name) return ( instantiate_spec(spec_cls, prop_val) if prop_val is not None else None ) def _group_spec(self, spec_cls, *prop_names): if not prop_names: return None data = {} for prop_name in prop_names: prop_val = self._data.get(prop_name) if prop_val: data[prop_name] = prop_val return instantiate_spec(spec_cls, data) def _inject_version(self, prop_names): for prop_name in prop_names: prop_data = self._data.get(prop_name) if isinstance(prop_data, dict): prop_data['version'] = self._version def _as_dict(self, prop_name): prop_val = self._data.get(prop_name) if not prop_val: return {} if isinstance(prop_val, dict): return prop_val elif isinstance(prop_val, list): result = {} for t in prop_val: result.update(t if isinstance(t, dict) else {t: ''}) return result elif isinstance(prop_val, six.string_types): return {prop_val: ''} @staticmethod def _parse_cmd_and_input(cmd_str): # TODO(rakhmerov): Try to find a way with one expression. cmd_matcher = CMD_PTRN.search(cmd_str) if not cmd_matcher: msg = "Invalid action/workflow task property: %s" % cmd_str raise exc.InvalidModelException(msg) cmd = cmd_matcher.group() params = {} for match in re.findall(PARAMS_PTRN, cmd_str): k = match[0] # Remove embracing quotes. 
v = match[1].strip() if v[0] == '"' or v[0] == "'": v = v[1:-1] else: try: v = json.loads(v) except Exception: pass params[k] = v return cmd, params def to_dict(self): return self._data def get_version(self): return self._version def __repr__(self): return "%s %s" % (self.__class__.__name__, self.to_dict()) class BaseListSpec(BaseSpec): item_class = None _schema = { "type": "object", "properties": { "version": types.VERSION }, "additionalProperties": types.NONEMPTY_DICT, "required": ["version"], } def __init__(self, data): super(BaseListSpec, self).__init__(data) self.items = [] for k, v in data.items(): if k != 'version': v['name'] = k self._inject_version([k]) self.items.append(instantiate_spec(self.item_class, v)) def validate_schema(self): super(BaseListSpec, self).validate_schema() if len(self._data.keys()) < 2: raise exc.InvalidModelException( 'At least one item must be in the list [data=%s].' % self._data ) def get_items(self): return self.items def __getitem__(self, idx): return self.items[idx] def __len__(self): return len(self.items) class BaseSpecList(object): item_class = None _version = '2.0' def __init__(self, data): self.items = {} for k, v in data.items(): if k != 'version': # At this point, we don't know if item schema is valid, # it may not be even a dictionary. So we should check the # type first before manipulating with it. 
if isinstance(v, dict): v['name'] = k v['version'] = self._version self.items[k] = instantiate_spec(self.item_class, v) def item_keys(self): return self.items.keys() def __iter__(self): return six.itervalues(self.items) def __getitem__(self, name): return self.items.get(name) def __len__(self): return len(self.items) def get(self, name): return self.__getitem__(name) mistral-6.0.0/mistral/lang/__init__.py0000666000175100017510000000000013245513261017713 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/lang/parser.py0000666000175100017510000001646613245513261017477 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import cachetools import threading import yaml from yaml import error import six from mistral.db.v2 import api as db_api from mistral import exceptions as exc from mistral.lang import base from mistral.lang.v2 import actions as actions_v2 from mistral.lang.v2 import tasks as tasks_v2 from mistral.lang.v2 import workbook as wb_v2 from mistral.lang.v2 import workflows as wf_v2 V2_0 = '2.0' ALL_VERSIONS = [V2_0] _WF_EX_CACHE = cachetools.LRUCache(maxsize=100) _WF_EX_CACHE_LOCK = threading.RLock() _WF_DEF_CACHE = cachetools.LRUCache(maxsize=100) _WF_DEF_CACHE_LOCK = threading.RLock() def parse_yaml(text): """Loads a text in YAML format as dictionary object. :param text: YAML text. :return: Parsed YAML document as dictionary. 
""" try: return yaml.safe_load(text) or {} except error.YAMLError as e: raise exc.DSLParsingException( "Definition could not be parsed: %s\n" % e ) def _get_spec_version(spec_dict): # If version is not specified it will '2.0' by default. ver = V2_0 if 'version' in spec_dict: ver = spec_dict['version'] def _raise(ver): raise exc.DSLParsingException('Unsupported DSL version: %s' % ver) try: str_ver = str(float(ver)) except (ValueError, TypeError): _raise(ver) if not ver or str_ver not in ALL_VERSIONS: _raise(ver) return ver # Factory methods to get specifications either from raw YAML formatted text or # from dictionaries parsed from YAML formatted text. def get_workbook_spec(spec_dict): if _get_spec_version(spec_dict) == V2_0: return base.instantiate_spec(wb_v2.WorkbookSpec, spec_dict) return None def get_workbook_spec_from_yaml(text): return get_workbook_spec(parse_yaml(text)) def get_action_spec(spec_dict): if _get_spec_version(spec_dict) == V2_0: return base.instantiate_spec(actions_v2.ActionSpec, spec_dict) return None def get_action_spec_from_yaml(text, action_name): spec_dict = parse_yaml(text) spec_dict['name'] = action_name return get_action_spec(spec_dict) def get_action_list_spec(spec_dict): return base.instantiate_spec(actions_v2.ActionListSpec, spec_dict) def get_action_list_spec_from_yaml(text): return get_action_list_spec(parse_yaml(text)) def get_workflow_spec(spec_dict): """Get workflow specification object from dictionary. NOTE: For large workflows this method can work very long (seconds). For this reason, method 'get_workflow_spec_by_definition_id' or 'get_workflow_spec_by_execution_id' should be used whenever possible because they cache specification objects. :param spec_dict: Raw specification dictionary. 
""" if _get_spec_version(spec_dict) == V2_0: return base.instantiate_spec(wf_v2.WorkflowSpec, spec_dict) return None def get_workflow_list_spec(spec_dict): return base.instantiate_spec(wf_v2.WorkflowListSpec, spec_dict) def get_workflow_spec_from_yaml(text): return get_workflow_spec(parse_yaml(text)) def get_workflow_list_spec_from_yaml(text): return get_workflow_list_spec(parse_yaml(text)) def get_task_spec(spec_dict): if _get_spec_version(spec_dict) == V2_0: return base.instantiate_spec(tasks_v2.TaskSpec, spec_dict) return None def get_workflow_definition(wb_def, wf_name): wf_name = wf_name + ":" return _parse_def_from_wb(wb_def, "workflows:", wf_name) def get_action_definition(wb_def, action_name): action_name += ":" return _parse_def_from_wb(wb_def, "actions:", action_name) def _parse_def_from_wb(wb_def, section_name, item_name): io = six.StringIO(wb_def[wb_def.index(section_name):]) io.readline() definition = [] ident = 0 # Get the indentation of the action/workflow name tag. for line in io: if item_name == line.strip(): ident = line.index(item_name) definition.append(line.lstrip()) break # Add strings to list unless same/less indentation is found. for line in io: new_line = line.strip() if not new_line: definition.append(line) elif new_line.startswith("#"): new_line = line if ident > line.index("#") else line[ident:] definition.append(new_line) else: temp = line.index(line.lstrip()) if ident < temp: definition.append(line[ident:]) else: break io.close() definition = ''.join(definition).rstrip() + '\n' return definition # Methods for obtaining specifications in a more efficient way using # caching techniques. @cachetools.cached(_WF_EX_CACHE, lock=_WF_EX_CACHE_LOCK) def get_workflow_spec_by_execution_id(wf_ex_id): """Gets workflow specification by workflow execution id. The idea is that when a workflow execution is running we must be getting the same workflow specification even if :param wf_ex_id: Workflow execution id. :return: Workflow specification. 
""" if not wf_ex_id: return None wf_ex = db_api.get_workflow_execution(wf_ex_id) return get_workflow_spec(wf_ex.spec) @cachetools.cached(_WF_DEF_CACHE, lock=_WF_DEF_CACHE_LOCK) def get_workflow_spec_by_definition_id(wf_def_id, wf_def_updated_at): """Gets specification by workflow definition id and its 'updated_at'. The idea of this method is to return a cached specification for the given workflow id and workflow definition 'updated_at'. As long as the given workflow definition remains the same in DB users of this method will be getting a cached value. Once the workflow definition has changed clients will be providing a different 'updated_at' value and hence this method will be called and spec is updated for this combination of parameters. Old cached values will be kicked out by LRU algorithm if the cache runs out of space. :param wf_def_id: Workflow definition id. :param wf_def_updated_at: Workflow definition 'updated_at' value. It serves only as part of cache key and is not explicitly used in the method. :return: Workflow specification. """ if not wf_def_id: return None wf_def = db_api.get_workflow_definition(wf_def_id) return get_workflow_spec(wf_def.spec) def cache_workflow_spec_by_execution_id(wf_ex_id, wf_spec): with _WF_EX_CACHE_LOCK: _WF_EX_CACHE[cachetools.keys.hashkey(wf_ex_id)] = wf_spec def get_wf_execution_spec_cache_size(): return len(_WF_EX_CACHE) def get_wf_definition_spec_cache_size(): return len(_WF_DEF_CACHE) def clear_caches(): """Clears all specification caches.""" with _WF_EX_CACHE_LOCK: _WF_EX_CACHE.clear() with _WF_DEF_CACHE_LOCK: _WF_DEF_CACHE.clear() mistral-6.0.0/mistral/lang/types.py0000666000175100017510000000544013245513261017335 0ustar zuulzuul00000000000000# Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from mistral import expressions NONEMPTY_STRING = { "type": "string", "minLength": 1 } UNIQUE_STRING_LIST = { "type": "array", "items": NONEMPTY_STRING, "uniqueItems": True, "minItems": 1 } POSITIVE_INTEGER = { "type": "integer", "minimum": 0 } POSITIVE_NUMBER = { "type": "number", "minimum": 0.0 } EXPRESSION = { "oneOf": [{ "type": "string", "pattern": "^%s\\s*$" % expressions.patterns[name] } for name in expressions.patterns] } EXPRESSION_CONDITION = { "type": "object", "minProperties": 1, "patternProperties": { "^\w+$": EXPRESSION } } ANY = { "anyOf": [ {"type": "array"}, {"type": "boolean"}, {"type": "integer"}, {"type": "number"}, {"type": "object"}, {"type": "string"} ] } ANY_NULLABLE = { "anyOf": [ {"type": "null"}, {"type": "array"}, {"type": "boolean"}, {"type": "integer"}, {"type": "number"}, {"type": "object"}, {"type": "string"} ] } NONEMPTY_DICT = { "type": "object", "minProperties": 1, "patternProperties": { "^\w+$": ANY_NULLABLE } } ONE_KEY_DICT = { "type": "object", "minProperties": 1, "maxProperties": 1, "patternProperties": { "^\w+$": ANY_NULLABLE } } STRING_OR_EXPRESSION_CONDITION = { "oneOf": [ NONEMPTY_STRING, EXPRESSION_CONDITION ] } EXPRESSION_OR_POSITIVE_INTEGER = { "oneOf": [ EXPRESSION, POSITIVE_INTEGER ] } EXPRESSION_OR_BOOLEAN = { "oneOf": [ EXPRESSION, {"type": "boolean"} ] } UNIQUE_STRING_OR_EXPRESSION_CONDITION_LIST = { "type": "array", "items": STRING_OR_EXPRESSION_CONDITION, "uniqueItems": True, "minItems": 1 } VERSION = { "anyOf": [ NONEMPTY_STRING, POSITIVE_INTEGER, POSITIVE_NUMBER ] } WORKFLOW_TYPE = { "enum": ["reverse", 
"direct"] } STRING_OR_ONE_KEY_DICT = { "oneOf": [ NONEMPTY_STRING, ONE_KEY_DICT ] } UNIQUE_STRING_OR_ONE_KEY_DICT_LIST = { "type": "array", "items": STRING_OR_ONE_KEY_DICT, "uniqueItems": True, "minItems": 1 } mistral-6.0.0/mistral/exceptions.py0000666000175100017510000001321513245513261017430 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # TODO(rakhmerov): Can we make one parent for errors and exceptions? class MistralError(Exception): """Mistral specific error. Reserved for situations that can't be automatically handled. When it occurs it signals that there is a major environmental problem like invalid startup configuration or implementation problem (e.g. some code doesn't take care of certain corner cases). From architectural perspective it's pointless to try to handle this type of problems except doing some finalization work like transaction rollback, deleting temporary files etc. """ message = "An unknown error occurred" http_code = 500 def __init__(self, message=None): if message is not None: self.message = message super(MistralError, self).__init__( '%d: %s' % (self.http_code, self.message)) @property def code(self): """This is here for webob to read. 
        https://github.com/Pylons/webob/blob/master/webob/exc.py
        """
        return self.http_code

    def __str__(self):
        return self.message


class MistralException(Exception):
    """Mistral specific exception.

    Reserved for situations that are not critical for program continuation.
    It is possible to recover from this type of problem automatically and
    continue program execution. Such problems may be related to invalid user
    input (such as invalid syntax) or temporary environmental problems.

    If an instance of a certain exception type bubbles up to the API layer,
    that exception type must be associated with an HTTP code so it's clear
    how to represent it for a client.

    To correctly use this class, inherit from it and define 'message' and
    'http_code' properties.
    """
    message = "An unknown exception occurred"
    http_code = 500

    def __init__(self, message=None):
        if message is not None:
            self.message = message

        super(MistralException, self).__init__(
            '%d: %s' % (self.http_code, self.message))

    @property
    def code(self):
        """This is here for webob to read.

        https://github.com/Pylons/webob/blob/master/webob/exc.py
        """
        return self.http_code

    def __str__(self):
        return self.message


# Database errors.

class DBError(MistralError):
    http_code = 400


class DBDuplicateEntryError(DBError):
    http_code = 409
    message = "Database object already exists"


class DBEntityNotFoundError(DBError):
    http_code = 404
    message = "Object not found"


# DSL exceptions.

class DSLParsingException(MistralException):
    http_code = 400


class ExpressionGrammarException(DSLParsingException):
    http_code = 400


class JinjaGrammarException(ExpressionGrammarException):
    message = "Invalid grammar of Jinja expression"


class YaqlGrammarException(ExpressionGrammarException):
    message = "Invalid grammar of YAQL expression"


class InvalidModelException(DSLParsingException):
    http_code = 400
    message = "Wrong entity definition"


# Various common exceptions and errors.
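The docstrings in this module describe a simple contract: subclasses override only `message` and `http_code`, the base class formats the exception text, and the web layer reads the `code` property to pick an HTTP status. A minimal standalone sketch of that pattern (hypothetical re-creation for illustration; the class names here are not part of this module):

```python
# Hypothetical standalone sketch of the exception contract used in
# mistral/exceptions.py: subclasses override only 'message' and 'http_code'.

class SketchException(Exception):
    """Base class: subclasses override 'message' and 'http_code'."""

    message = "An unknown exception occurred"
    http_code = 500

    def __init__(self, message=None):
        if message is not None:
            # Per-instance override; the class attribute is untouched.
            self.message = message

        super(SketchException, self).__init__(
            '%d: %s' % (self.http_code, self.message))

    @property
    def code(self):
        # Read by the web layer (webob) to choose the HTTP status.
        return self.http_code

    def __str__(self):
        return self.message


class SketchNotFoundError(SketchException):
    http_code = 404
    message = "Object not found"


err = SketchNotFoundError()                      # class-level default message
custom = SketchNotFoundError("workbook 'wb1' not found")
```

With no argument the class-level `message` is used; passing a message overrides it per instance, while `args[0]` still carries the `'<code>: <message>'` form built by the base initializer.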
class EvaluationException(MistralException): http_code = 400 class JinjaEvaluationException(EvaluationException): message = "Can not evaluate Jinja expression" class YaqlEvaluationException(EvaluationException): message = "Can not evaluate YAQL expression" class DataAccessException(MistralException): http_code = 400 class ActionException(MistralException): http_code = 400 class InvalidActionException(MistralException): http_code = 400 class ActionRegistrationException(MistralException): message = "Failed to register action" class EngineException(MistralException): http_code = 500 class WorkflowException(MistralException): http_code = 400 class EventTriggerException(MistralException): http_code = 400 class InputException(MistralException): http_code = 400 class ApplicationContextNotFoundException(MistralException): http_code = 400 message = "Application context not found" class InvalidResultException(MistralException): http_code = 400 message = "Unable to parse result" class SizeLimitExceededException(MistralException): http_code = 400 def __init__(self, field_name, size_kb, size_limit_kb): super(SizeLimitExceededException, self).__init__( "Size of '%s' is %dKB which exceeds the limit of %dKB" % (field_name, size_kb, size_limit_kb)) class CoordinationException(MistralException): http_code = 500 class NotAllowedException(MistralException): http_code = 403 message = "Operation not allowed" class UnauthorizedException(MistralException): http_code = 401 message = "Unauthorized" class KombuException(Exception): def __init__(self, e): super(KombuException, self).__init__(e) self.exc_type = e.__class__.__name__ self.value = str(e) class InvalidStateTransitionException(MistralException): http_code = 400 message = 'Invalid state transition' mistral-6.0.0/mistral/cmd/0000775000175100017510000000000013245513604015435 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/cmd/__init__.py0000666000175100017510000000000013245513261017535 0ustar 
zuulzuul00000000000000mistral-6.0.0/mistral/cmd/launch.py0000666000175100017510000001626213245513272017273 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import sys import eventlet eventlet.monkey_patch( os=True, select=True, socket=True, thread=False if '--use-debugger' in sys.argv else True, time=True) import os # If ../mistral/__init__.py exists, add ../ to Python search path, so that # it will override what happens to be installed in /usr/(local/)lib/python... 
POSSIBLE_TOPDIR = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]), os.pardir, os.pardir)) if os.path.exists(os.path.join(POSSIBLE_TOPDIR, 'mistral', '__init__.py')): sys.path.insert(0, POSSIBLE_TOPDIR) from oslo_config import cfg from oslo_log import log as logging from oslo_service import service from mistral.api import service as api_service from mistral import config from mistral.engine import engine_server from mistral.event_engine import event_engine_server from mistral.executors import executor_server from mistral.rpc import base as rpc from mistral import version CONF = cfg.CONF SERVER_THREAD_MANAGER = None SERVER_PROCESS_MANAGER = None def launch_thread(server, workers=1): try: global SERVER_THREAD_MANAGER if not SERVER_THREAD_MANAGER: SERVER_THREAD_MANAGER = service.ServiceLauncher(CONF) SERVER_THREAD_MANAGER.launch_service(server, workers=workers) except Exception as e: sys.stderr.write("ERROR: %s\n" % e) sys.exit(1) def launch_process(server, workers=1): try: global SERVER_PROCESS_MANAGER if not SERVER_PROCESS_MANAGER: SERVER_PROCESS_MANAGER = service.ProcessLauncher(CONF) SERVER_PROCESS_MANAGER.launch_service(server, workers=workers) except Exception as e: sys.stderr.write("ERROR: %s\n" % e) sys.exit(1) def launch_executor(): launch_thread(executor_server.get_oslo_service()) def launch_engine(): launch_thread(engine_server.get_oslo_service()) def launch_event_engine(): launch_thread(event_engine_server.get_oslo_service()) def launch_api(): server = api_service.WSGIService('mistral_api') launch_process(server, workers=server.workers) def launch_any(options): for option in options: LAUNCH_OPTIONS[option]() global SERVER_PROCESS_MANAGER global SERVER_THREAD_MANAGER if SERVER_PROCESS_MANAGER: SERVER_PROCESS_MANAGER.wait() if SERVER_THREAD_MANAGER: SERVER_THREAD_MANAGER.wait() # Map cli options to appropriate functions. The cli options are # registered in mistral's config.py. 
LAUNCH_OPTIONS = { 'api': launch_api, 'engine': launch_engine, 'executor': launch_executor, 'event-engine': launch_event_engine } MISTRAL_TITLE = """ |\\\ //| || || ||\\\ //|| __ || __ __ || || \\\// || || // |||||| || \\\ // \\\ || || \\/ || \\\ || || || \\\ || || || || \\\ || || || /\\\ || || || || __// ||_// || \\\__// \\\_ || Mistral Workflow Service, version %s """ % version.version_string() def print_server_info(): print(MISTRAL_TITLE) comp_str = ("[%s]" % ','.join(LAUNCH_OPTIONS) if cfg.CONF.server == ['all'] else cfg.CONF.server) print('Launching server components %s...' % comp_str) def get_properly_ordered_parameters(): """Orders launch parameters in the right order. In oslo it's important the order of the launch parameters. if --config-file came after the command line parameters the command line parameters are ignored. So to make user command line parameters are never ignored this method moves --config-file to be always first. """ args = sys.argv[1:] for arg in sys.argv[1:]: if arg == '--config-file' or arg.startswith('--config-file='): if "=" in arg: conf_file_value = arg.split("=", 1)[1] else: conf_file_value = args[args.index(arg) + 1] args.remove(conf_file_value) args.remove(arg) args.insert(0, "--config-file") args.insert(1, conf_file_value) return args def main(): try: config.parse_args(get_properly_ordered_parameters()) print_server_info() logging.setup(CONF, 'Mistral') # Please refer to the oslo.messaging documentation for transport # configuration. The default transport for oslo.messaging is # rabbitMQ. The available transport drivers are listed in the # setup.cfg file in oslo.messaging under the entry_points section for # oslo.messaging.drivers. The transport driver is specified using the # rpc_backend option in the default section of the oslo configuration # file. The expected value for the rpc_backend is one of the key # values available for the oslo.messaging.drivers (i.e. rabbit, fake). 
# There are additional options such as ssl and credential that can be # specified depending on the driver. Please refer to the driver # implementation for those additional options. It's important to note # that the "fake" transport should only be used if "all" the Mistral # servers are launched on the same process. Otherwise, messages do not # get delivered if the Mistral servers are launched on different # processes because the "fake" transport is using an in process queue. rpc.get_transport() if cfg.CONF.server == ['all']: # Launch all servers. launch_any(LAUNCH_OPTIONS.keys()) else: # Validate launch option. if set(cfg.CONF.server) - set(LAUNCH_OPTIONS.keys()): raise Exception('Valid options are all or any combination of ' ', '.join(LAUNCH_OPTIONS.keys())) # Launch distinct set of server(s). launch_any(set(cfg.CONF.server)) except RuntimeError as excp: sys.stderr.write("ERROR: %s\n" % excp) sys.exit(1) # Helper method used in unit tests to reset the service launchers. def reset_server_managers(): global SERVER_THREAD_MANAGER global SERVER_PROCESS_MANAGER SERVER_THREAD_MANAGER = None SERVER_PROCESS_MANAGER = None # Helper method used in unit tests to access the service launcher. def get_server_thread_manager(): global SERVER_THREAD_MANAGER return SERVER_THREAD_MANAGER # Helper method used in unit tests to access the process launcher. def get_server_process_manager(): global SERVER_PROCESS_MANAGER return SERVER_PROCESS_MANAGER if __name__ == '__main__': sys.exit(main()) mistral-6.0.0/mistral/policies/0000775000175100017510000000000013245513604016501 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/policies/action_executions.py0000666000175100017510000000456113245513261022605 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from mistral.policies import base ACTION_EXECUTIONS = 'action_executions:%s' rules = [ policy.DocumentedRuleDefault( name=ACTION_EXECUTIONS % 'create', check_str=base.RULE_ADMIN_OR_OWNER, description='Create new action execution.', operations=[ { 'path': '/v2/action_executions', 'method': 'POST' } ] ), policy.DocumentedRuleDefault( name=ACTION_EXECUTIONS % 'delete', check_str=base.RULE_ADMIN_OR_OWNER, description='Delete the specified action execution.', operations=[ { 'path': '/v2/action_executions', 'method': 'DELETE' } ] ), policy.DocumentedRuleDefault( name=ACTION_EXECUTIONS % 'get', check_str=base.RULE_ADMIN_OR_OWNER, description='Return the specified action execution.', operations=[ { 'path': '/v2/action_executions/{action_execution_id}', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=ACTION_EXECUTIONS % 'list', check_str=base.RULE_ADMIN_OR_OWNER, description='Return all tasks within the execution.', operations=[ { 'path': '/v2/action_executions', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=ACTION_EXECUTIONS % 'update', check_str=base.RULE_ADMIN_OR_OWNER, description='Update the specified action execution.', operations=[ { 'path': '/v2/action_executions', 'method': 'PUT' } ] ) ] def list_rules(): return rules mistral-6.0.0/mistral/policies/action.py0000666000175100017510000000425513245513261020337 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from mistral.policies import base ACTIONS = 'actions:%s' rules = [ policy.DocumentedRuleDefault( name=ACTIONS % 'create', check_str=base.RULE_ADMIN_OR_OWNER, description='Create a new action.', operations=[ { 'path': '/v2/actions', 'method': 'POST' } ] ), policy.DocumentedRuleDefault( name=ACTIONS % 'delete', check_str=base.RULE_ADMIN_OR_OWNER, description='Delete the named action.', operations=[ { 'path': '/v2/actions', 'method': 'DELETE' } ] ), policy.DocumentedRuleDefault( name=ACTIONS % 'get', check_str=base.RULE_ADMIN_OR_OWNER, description='Return the named action.', operations=[ { 'path': '/v2/actions/{action_id}', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=ACTIONS % 'list', check_str=base.RULE_ADMIN_OR_OWNER, description='Return all actions.', operations=[ { 'path': '/v2/actions', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=ACTIONS % 'update', check_str=base.RULE_ADMIN_OR_OWNER, description='Update one or more actions.', operations=[ { 'path': '/v2/actions', 'method': 'PUT' } ] ) ] def list_rules(): return rules mistral-6.0.0/mistral/policies/execution.py0000666000175100017510000000506513245513261021065 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from mistral.policies import base EXECUTIONS = 'executions:%s' rules = [ policy.DocumentedRuleDefault( name=EXECUTIONS % 'create', check_str=base.RULE_ADMIN_OR_OWNER, description='Create a new execution.', operations=[ { 'path': '/v2/executions', 'method': 'POST' } ] ), policy.DocumentedRuleDefault( name=EXECUTIONS % 'delete', check_str=base.RULE_ADMIN_OR_OWNER, description='Delete the specified execution.', operations=[ { 'path': '/v2/executions/{execution_id}', 'method': 'DELETE' } ] ), policy.DocumentedRuleDefault( name=EXECUTIONS % 'get', check_str=base.RULE_ADMIN_OR_OWNER, description='Return the specified execution.', operations=[ { 'path': '/v2/executions/{execution_id}', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=EXECUTIONS % 'list', check_str=base.RULE_ADMIN_OR_OWNER, description='Return all executions.', operations=[ { 'path': '/v2/executions', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=EXECUTIONS % 'list:all_projects', check_str=base.RULE_ADMIN_ONLY, description='Return all executions from all projects.', operations=[ { 'path': '/v2/executions', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=EXECUTIONS % 'update', check_str=base.RULE_ADMIN_OR_OWNER, description='Update an execution.', operations=[ { 'path': '/v2/executions', 'method': 'PUT' } ] ) ] def list_rules(): return rules mistral-6.0.0/mistral/policies/base.py0000666000175100017510000000165213245513261017772 0ustar zuulzuul00000000000000# All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy RULE_ADMIN_OR_OWNER = 'rule:admin_or_owner' RULE_ADMIN_ONLY = 'rule:admin_only' rules = [ policy.RuleDefault( "admin_only", "is_admin:True"), policy.RuleDefault( "admin_or_owner", "is_admin:True or project_id:%(project_id)s") ] def list_rules(): return rules mistral-6.0.0/mistral/policies/__init__.py0000666000175100017510000000275313245513261020622 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import itertools from mistral.policies import action from mistral.policies import action_executions from mistral.policies import base from mistral.policies import cron_trigger from mistral.policies import environment from mistral.policies import event_trigger from mistral.policies import execution from mistral.policies import member from mistral.policies import service from mistral.policies import task from mistral.policies import workbook from mistral.policies import workflow def list_rules(): return itertools.chain( action.list_rules(), action_executions.list_rules(), base.list_rules(), cron_trigger.list_rules(), environment.list_rules(), event_trigger.list_rules(), execution.list_rules(), member.list_rules(), service.list_rules(), task.list_rules(), workbook.list_rules(), workflow.list_rules() ) mistral-6.0.0/mistral/policies/environment.py0000666000175100017510000000442213245513261021422 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
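`mistral/policies/__init__.py` above aggregates every policy module's rules with `itertools.chain`, so the combined rule list is produced lazily without copying. A small self-contained sketch of the same pattern (the module names here are made up for illustration):

```python
# Sketch of the list_rules() aggregation pattern used above: each policy
# module exposes its own list_rules(), and the package chains them.
import itertools


def module_a_rules():
    return ['a:get', 'a:list']


def module_b_rules():
    return ['b:get']


def list_rules():
    # Returns a lazy iterator over all modules' rules.
    return itertools.chain(module_a_rules(), module_b_rules())


print(list(list_rules()))  # ['a:get', 'a:list', 'b:get']
```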
from oslo_policy import policy from mistral.policies import base ENVIRONMENTS = 'environments:%s' rules = [ policy.DocumentedRuleDefault( name=ENVIRONMENTS % 'create', check_str=base.RULE_ADMIN_OR_OWNER, description='Create a new environment.', operations=[ { 'path': '/v2/environments', 'method': 'POST' } ] ), policy.DocumentedRuleDefault( name=ENVIRONMENTS % 'delete', check_str=base.RULE_ADMIN_OR_OWNER, description='Delete the named environment.', operations=[ { 'path': '/v2/environments/{environment_name}', 'method': 'DELETE' } ] ), policy.DocumentedRuleDefault( name=ENVIRONMENTS % 'get', check_str=base.RULE_ADMIN_OR_OWNER, description='Return the named environment.', operations=[ { 'path': '/v2/environments/{environment_name}', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=ENVIRONMENTS % 'list', check_str=base.RULE_ADMIN_OR_OWNER, description='Return all environments.', operations=[ { 'path': '/v2/environments', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=ENVIRONMENTS % 'update', check_str=base.RULE_ADMIN_OR_OWNER, description='Update an environment.', operations=[ { 'path': '/v2/environments', 'method': 'PUT' } ] ) ] def list_rules(): return rules mistral-6.0.0/mistral/policies/cron_trigger.py0000666000175100017510000000444013245513261021542 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_policy import policy from mistral.policies import base CRON_TRIGGERS = 'cron_triggers:%s' rules = [ policy.DocumentedRuleDefault( name=CRON_TRIGGERS % 'create', check_str=base.RULE_ADMIN_OR_OWNER, description='Creates a new cron trigger.', operations=[ { 'path': '/v2/cron_triggers', 'method': 'POST' } ] ), policy.DocumentedRuleDefault( name=CRON_TRIGGERS % 'delete', check_str=base.RULE_ADMIN_OR_OWNER, description='Delete cron trigger.', operations=[ { 'path': '/v2/cron_triggers', 'method': 'DELETE' } ] ), policy.DocumentedRuleDefault( name=CRON_TRIGGERS % 'get', check_str=base.RULE_ADMIN_OR_OWNER, description='Returns the named cron trigger.', operations=[ { 'path': '/v2/cron_triggers/{cron_trigger_id}', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=CRON_TRIGGERS % 'list', check_str=base.RULE_ADMIN_OR_OWNER, description='Return all cron triggers.', operations=[ { 'path': '/v2/cron_triggers', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=CRON_TRIGGERS % 'list:all_projects', check_str=base.RULE_ADMIN_ONLY, description='Return all cron triggers of all projects.', operations=[ { 'path': '/v2/cron_triggers', 'method': 'GET' } ] ) ] def list_rules(): return rules mistral-6.0.0/mistral/policies/workflow.py0000666000175100017510000000502713245513261020732 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_policy import policy from mistral.policies import base WORKFLOWS = 'workflows:%s' rules = [ policy.DocumentedRuleDefault( name=WORKFLOWS % 'create', check_str=base.RULE_ADMIN_OR_OWNER, description='Create a new workflow.', operations=[ { 'path': '/v2/workflows', 'method': 'POST' } ] ), policy.DocumentedRuleDefault( name=WORKFLOWS % 'delete', check_str=base.RULE_ADMIN_OR_OWNER, description='Delete a workflow.', operations=[ { 'path': '/v2/workflows', 'method': 'DELETE' } ] ), policy.DocumentedRuleDefault( name=WORKFLOWS % 'get', check_str=base.RULE_ADMIN_OR_OWNER, description='Return the named workflow.', operations=[ { 'path': '/v2/workflows/{workflow_id}', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=WORKFLOWS % 'list', check_str=base.RULE_ADMIN_OR_OWNER, description='Return a list of workflows.', operations=[ { 'path': '/v2/workflows', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=WORKFLOWS % 'list:all_projects', check_str=base.RULE_ADMIN_ONLY, description='Return a list of workflows from all projects.', operations=[ { 'path': '/v2/workflows', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=WORKFLOWS % 'update', check_str=base.RULE_ADMIN_OR_OWNER, description='Update one or more workflows.', operations=[ { 'path': '/v2/workflows', 'method': 'PUT' } ] ) ] def list_rules(): return rules mistral-6.0.0/mistral/policies/workbook.py0000666000175100017510000000431113245513261020710 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_policy import policy

from mistral.policies import base

WORKBOOKS = 'workbooks:%s'

rules = [
    policy.DocumentedRuleDefault(
        name=WORKBOOKS % 'create',
        check_str=base.RULE_ADMIN_OR_OWNER,
        description='Create a new workbook.',
        operations=[
            {
                'path': '/v2/workbooks',
                'method': 'POST'
            }
        ]
    ),
    policy.DocumentedRuleDefault(
        name=WORKBOOKS % 'delete',
        check_str=base.RULE_ADMIN_OR_OWNER,
        description='Delete the named workbook.',
        operations=[
            {
                'path': '/v2/workbooks',
                'method': 'DELETE'
            }
        ]
    ),
    policy.DocumentedRuleDefault(
        name=WORKBOOKS % 'get',
        check_str=base.RULE_ADMIN_OR_OWNER,
        description='Return the named workbook.',
        operations=[
            {
                'path': '/v2/workbooks/{workbook_name}',
                'method': 'GET'
            }
        ]
    ),
    policy.DocumentedRuleDefault(
        name=WORKBOOKS % 'list',
        check_str=base.RULE_ADMIN_OR_OWNER,
        description='Return all workbooks.',
        operations=[
            {
                'path': '/v2/workbooks',
                'method': 'GET'
            }
        ]
    ),
    policy.DocumentedRuleDefault(
        name=WORKBOOKS % 'update',
        check_str=base.RULE_ADMIN_OR_OWNER,
        description='Update a workbook.',
        operations=[
            {
                'path': '/v2/workbooks',
                'method': 'PUT'
            }
        ]
    )
]


def list_rules():
    return rules

mistral-6.0.0/mistral/policies/event_trigger.py

# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy from mistral.policies import base EVENT_TRIGGERS = 'event_triggers:%s' # NOTE(hieulq): all API operations of below rules are not documented in API # reference docs yet. rules = [ policy.DocumentedRuleDefault( name=EVENT_TRIGGERS % 'create', check_str=base.RULE_ADMIN_OR_OWNER, description='Create a new event trigger.', operations=[ { 'path': '/v2/event_triggers', 'method': 'POST' } ] ), policy.DocumentedRuleDefault( name=EVENT_TRIGGERS % 'create:public', check_str=base.RULE_ADMIN_ONLY, description='Create a new event trigger for public usage.', operations=[ { 'path': '/v2/event_triggers', 'method': 'POST' } ] ), policy.DocumentedRuleDefault( name=EVENT_TRIGGERS % 'delete', check_str=base.RULE_ADMIN_OR_OWNER, description='Delete event trigger.', operations=[ { 'path': '/v2/event_triggers/{event_trigger_id}', 'method': 'DELETE' } ] ), policy.DocumentedRuleDefault( name=EVENT_TRIGGERS % 'get', check_str=base.RULE_ADMIN_OR_OWNER, description='Returns the specified event trigger.', operations=[ { 'path': '/v2/event_triggers/{event_trigger_id}', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=EVENT_TRIGGERS % 'list', check_str=base.RULE_ADMIN_OR_OWNER, description='Return all event triggers.', operations=[ { 'path': '/v2/event_triggers', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=EVENT_TRIGGERS % 'list:all_projects', check_str=base.RULE_ADMIN_ONLY, description='Return all event triggers from all projects.', operations=[ { 'path': '/v2/event_triggers', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=EVENT_TRIGGERS % 'update', check_str=base.RULE_ADMIN_OR_OWNER, description='Updates an existing event trigger.', operations=[ { 'path': '/v2/event_triggers', 'method': 'PUT' } ] ) ] def list_rules(): return rules mistral-6.0.0/mistral/policies/service.py0000666000175100017510000000201613245513261020513 0ustar zuulzuul00000000000000# All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from mistral.policies import base SERVICES = 'services:%s' rules = [ policy.DocumentedRuleDefault( name=SERVICES % 'list', check_str=base.RULE_ADMIN_OR_OWNER, description='Return all Mistral services.', operations=[ { 'path': '/v2/services', 'method': 'GET' } ] ) ] def list_rules(): return rules mistral-6.0.0/mistral/policies/member.py0000666000175100017510000000461213245513261020326 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_policy import policy from mistral.policies import base MEMBERS = 'members:%s' # NOTE(hieulq): all API operations of below rules are not documented in API # reference docs yet. 
rules = [ policy.DocumentedRuleDefault( name=MEMBERS % 'create', check_str=base.RULE_ADMIN_OR_OWNER, description='Shares the resource to a new member.', operations=[ { 'path': '/v2/members', 'method': 'POST' } ] ), policy.DocumentedRuleDefault( name=MEMBERS % 'delete', check_str=base.RULE_ADMIN_OR_OWNER, description='Deletes a member from the member list of a resource.', operations=[ { 'path': '/v2/members', 'method': 'DELETE' } ] ), policy.DocumentedRuleDefault( name=MEMBERS % 'get', check_str=base.RULE_ADMIN_OR_OWNER, description='Shows resource member details.', operations=[ { 'path': '/v2/members/{member_id}', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=MEMBERS % 'list', check_str=base.RULE_ADMIN_OR_OWNER, description='Return all members with whom the resource has been ' 'shared.', operations=[ { 'path': '/v2/members', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=MEMBERS % 'update', check_str=base.RULE_ADMIN_OR_OWNER, description='Sets the status for a resource member.', operations=[ { 'path': '/v2/members', 'method': 'PUT' } ] ) ] def list_rules(): return rules mistral-6.0.0/mistral/policies/task.py0000666000175100017510000000312413245513261020016 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
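Each `DocumentedRuleDefault` above bundles a rule name, a check string, a description, and the API operations it guards. That metadata is enough to render a commented sample policy file, roughly what `oslopolicy-sample-generator` produces. The sketch below imitates the idea with plain dicts instead of oslo.policy objects (the rendering format is an assumption, not the generator's exact output):

```python
# Hypothetical renderer turning rule metadata into a sample policy file,
# in the spirit of oslopolicy-sample-generator. Plain dicts stand in for
# policy.DocumentedRuleDefault here.

rules = [
    {
        'name': 'tasks:get',
        'check_str': 'rule:admin_or_owner',
        'description': 'Return the specified task.',
        'operations': [{'path': '/v2/tasks/{task_id}', 'method': 'GET'}],
    },
]


def render_sample(rules):
    lines = []
    for r in rules:
        lines.append('# %s' % r['description'])
        for op in r['operations']:
            lines.append('# %(method)s  %(path)s' % op)
        # Commented-out default, ready for operators to override.
        lines.append('#"%s": "%s"' % (r['name'], r['check_str']))
    return '\n'.join(lines)


print(render_sample(rules))
```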
from oslo_policy import policy from mistral.policies import base TASKS = 'tasks:%s' rules = [ policy.DocumentedRuleDefault( name=TASKS % 'get', check_str=base.RULE_ADMIN_OR_OWNER, description='Return the specified task.', operations=[ { 'path': '/v2/tasks/{task_id}', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=TASKS % 'list', check_str=base.RULE_ADMIN_OR_OWNER, description='Return all tasks.', operations=[ { 'path': '/v2/tasks', 'method': 'GET' } ] ), policy.DocumentedRuleDefault( name=TASKS % 'update', check_str=base.RULE_ADMIN_OR_OWNER, description='Update the specified task execution.', operations=[ { 'path': '/v2/tasks', 'method': 'PUT' } ] ) ] def list_rules(): return rules mistral-6.0.0/mistral/event_engine/0000775000175100017510000000000013245513604017340 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/event_engine/event_engine_server.py0000666000175100017510000000572713245513261023762 0ustar zuulzuul00000000000000# Copyright 2016 - Nokia Networks # Copyright 2017 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_log import log as logging from mistral import config as cfg from mistral.event_engine import default_event_engine as evt_eng from mistral.rpc import base as rpc from mistral.service import base as service_base from mistral.utils import profiler as profiler_utils LOG = logging.getLogger(__name__) class EventEngineServer(service_base.MistralService): """RPC EventEngine server. 
This class manages event engine life-cycle and gets registered as an RPC endpoint to process event engine specific calls. It also registers a cluster member associated with this instance of event engine. """ def __init__(self, event_engine): super(EventEngineServer, self).__init__('event_engine_group') self._event_engine = event_engine self._rpc_server = None def start(self): super(EventEngineServer, self).start() profiler_utils.setup( 'mistral-event-engine', cfg.CONF.event_engine.host ) # Initialize and start RPC server. self._rpc_server = rpc.get_rpc_server_driver()(cfg.CONF.event_engine) self._rpc_server.register_endpoint(self) self._rpc_server.run() self._notify_started('Event engine server started.') def stop(self, graceful=False): super(EventEngineServer, self).stop(graceful) if self._rpc_server: self._rpc_server.stop(graceful) def create_event_trigger(self, rpc_ctx, trigger, events): LOG.info( "Received RPC request 'create_event_trigger'[rpc_ctx=%s," " trigger=%s, events=%s", rpc_ctx, trigger, events ) return self._event_engine.create_event_trigger(trigger, events) def delete_event_trigger(self, rpc_ctx, trigger, events): LOG.info( "Received RPC request 'delete_event_trigger'[rpc_ctx=%s," " trigger=%s, events=%s", rpc_ctx, trigger, events ) return self._event_engine.delete_event_trigger(trigger, events) def update_event_trigger(self, rpc_ctx, trigger): LOG.info( "Received RPC request 'update_event_trigger'[rpc_ctx=%s," " trigger=%s", rpc_ctx, trigger ) return self._event_engine.update_event_trigger(trigger) def get_oslo_service(): return EventEngineServer(evt_eng.DefaultEventEngine()) mistral-6.0.0/mistral/event_engine/base.py0000666000175100017510000000214313245513261020625 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # Copyright 2017 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import abc

import six


@six.add_metaclass(abc.ABCMeta)
class EventEngine(object):
    """Action event trigger interface."""

    @abc.abstractmethod
    def create_event_trigger(self, trigger, events):
        raise NotImplementedError()

    @abc.abstractmethod
    def update_event_trigger(self, trigger):
        raise NotImplementedError()

    @abc.abstractmethod
    def delete_event_trigger(self, trigger, events):
        raise NotImplementedError()

mistral-6.0.0/mistral/event_engine/default_event_engine.py

# Copyright 2016 Catalyst IT Ltd
# Copyright 2017 - Brocade Communications Systems, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
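The `EventEngine` base class above is an abstract interface: `six.add_metaclass(abc.ABCMeta)` prevents direct instantiation, and concrete engines such as `DefaultEventEngine` must implement every abstract method. A trimmed Python 3 sketch of the same mechanism (using `abc.ABC`, the modern equivalent of the `six` decorator, and only one abstract method for brevity):

```python
# Sketch of the abstract-interface pattern used by EventEngine above.
# abc.ABC here plays the role of @six.add_metaclass(abc.ABCMeta).
import abc


class EventEngine(abc.ABC):
    @abc.abstractmethod
    def create_event_trigger(self, trigger, events):
        raise NotImplementedError()


class DummyEventEngine(EventEngine):
    """Concrete engine: must implement all abstract methods."""

    def create_event_trigger(self, trigger, events):
        return (trigger, events)


try:
    EventEngine()  # abstract class -> TypeError
except TypeError:
    print('abstract class cannot be instantiated')

print(DummyEventEngine().create_event_trigger('t1', ['a.b']))
```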
from collections import defaultdict import json import os import threading from oslo_config import cfg from oslo_log import log as logging from oslo_service import threadgroup from oslo_utils import fnmatch import six import yaml from mistral import context as auth_ctx from mistral.db.v2 import api as db_api from mistral.event_engine import base from mistral import exceptions from mistral import expressions from mistral import messaging as mistral_messaging from mistral.rpc import clients as rpc from mistral.services import security LOG = logging.getLogger(__name__) CONF = cfg.CONF DEFAULT_PROPERTIES = { 'service': '<% $.publisher %>', 'project_id': '<% $.context.project_id %>', 'user_id': '<% $.context.user_id %>', 'timestamp': '<% $.timestamp %>' } class EventDefinition(object): def __init__(self, definition_cfg): self.cfg = definition_cfg try: self.event_types = self.cfg['event_types'] self.properties = self.cfg['properties'] except KeyError as err: raise exceptions.MistralException( "Required field %s not specified" % err.args[0] ) if isinstance(self.event_types, six.string_types): self.event_types = [self.event_types] def match_type(self, event_type): for t in self.event_types: if fnmatch.fnmatch(event_type, t): return True return False def convert(self, event): return expressions.evaluate_recursively(self.properties, event) class NotificationsConverter(object): def __init__(self): config_file = CONF.event_engine.event_definitions_cfg_file definition_cfg = [] if os.path.exists(config_file): with open(config_file) as cf: config = cf.read() try: definition_cfg = yaml.safe_load(config) except yaml.YAMLError as err: if hasattr(err, 'problem_mark'): mark = err.problem_mark errmsg = ( "Invalid YAML syntax in Definitions file " "%(file)s at line: %(line)s, column: %(column)s." 
% dict(file=config_file, line=mark.line + 1, column=mark.column + 1) ) else: errmsg = ( "YAML error reading Definitions file %s" % CONF.event_engine.event_definitions_cfg_file ) LOG.error(errmsg) raise exceptions.MistralError( 'Invalid event definition configuration file. %s' % config_file ) self.definitions = [EventDefinition(event_def) for event_def in reversed(definition_cfg)] def get_event_definition(self, event_type): for d in self.definitions: if d.match_type(event_type): return d return None def convert(self, event_type, event): edef = self.get_event_definition(event_type) if edef is None: LOG.debug('No event definition found for type: %s, use default ' 'settings instead.', event_type) return expressions.evaluate_recursively(DEFAULT_PROPERTIES, event) return edef.convert(event) class DefaultEventEngine(base.EventEngine): """Event engine server. A separate service that is responsible for listening event notification and triggering workflows defined by end user. """ def __init__(self): self.engine_client = rpc.get_engine_client() self.event_queue = six.moves.queue.Queue() self.handler_tg = threadgroup.ThreadGroup() self.event_triggers_map = defaultdict(list) self.exchange_topic_events_map = defaultdict(set) self.exchange_topic_listener_map = {} self.lock = threading.Lock() LOG.debug('Loading notification definitions.') self.notification_converter = NotificationsConverter() self._start_handler() self._start_listeners() def _get_endpoint_cls(self, events): """Create a messaging endpoint class. The endpoint implements the method named like the priority, and only handle the notification match the NotificationFilter rule set into the filter_rule attribute of the endpoint. """ # Handle each priority of notification messages. 
event_priorities = ['audit', 'critical', 'debug', 'error', 'info'] attrs = dict.fromkeys( event_priorities, mistral_messaging.handle_event ) attrs['event_types'] = events endpoint_cls = type( 'MistralNotificationEndpoint', (mistral_messaging.NotificationEndpoint,), attrs, ) return endpoint_cls def _add_event_listener(self, exchange, topic, events): """Add or update event listener for specified exchange, topic. Create a new event listener for the event trigger if no existing listener relates to (exchange, topic). Or, restart existing event listener with updated events. """ key = (exchange, topic) if key in self.exchange_topic_listener_map: listener = self.exchange_topic_listener_map[key] listener.stop() listener.wait() endpoint = self._get_endpoint_cls(events)(self) LOG.debug("Starting to listen to AMQP. exchange: %s, topic: %s", exchange, topic) listener = mistral_messaging.start_listener( CONF, exchange, topic, [endpoint] ) self.exchange_topic_listener_map[key] = listener def stop_all_listeners(self): for listener in six.itervalues(self.exchange_topic_listener_map): listener.stop() listener.wait() def _start_listeners(self): triggers = db_api.get_event_triggers(insecure=True) LOG.info('Found %s event triggers.', len(triggers)) for trigger in triggers: exchange_topic = (trigger.exchange, trigger.topic) self.exchange_topic_events_map[exchange_topic].add(trigger.event) trigger_info = trigger.to_dict() trigger_info['workflow_namespace'] = trigger.workflow.namespace self.event_triggers_map[trigger.event].append(trigger_info) for (ex_t, events) in self.exchange_topic_events_map.items(): exchange, topic = ex_t self._add_event_listener(exchange, topic, events) def _start_workflow(self, triggers, event_params): """Start workflows defined in event triggers.""" for t in triggers: LOG.info('Start to process event trigger: %s', t['id']) workflow_params = t.get('workflow_params', {}) workflow_params.update({'event_params': event_params}) # Setup context before schedule 
triggers. ctx = security.create_context(t['trust_id'], t['project_id']) auth_ctx.set_ctx(ctx) description = { "description": ( "Workflow execution created by event" " trigger '(%s)'." % t['id'] ), "triggered_by": { "type": "event_trigger", "id": t['id'], "name": t['name'] } } try: self.engine_client.start_workflow( t['workflow_id'], t['workflow_namespace'], t['workflow_input'], description=json.dumps(description), **workflow_params ) except Exception as e: LOG.exception("Failed to process event trigger %s, " "error: %s", t['id'], str(e)) finally: auth_ctx.set_ctx(None) def _process_event_queue(self, *args, **kwargs): """Process notification events. This function is called in a thread. """ while True: event = self.event_queue.get() context = event.get('context') event_type = event.get('event_type') # NOTE(kong): Use lock here to protect event_triggers_map variable # from being updated outside the thread. with self.lock: if event_type in self.event_triggers_map: triggers = self.event_triggers_map[event_type] # There may be more projects registered the same event. project_ids = [t['project_id'] for t in triggers] any_public = any( [t['scope'] == 'public' for t in triggers] ) # Skip the event doesn't belong to any event trigger owner. if (not any_public and CONF.pecan.auth_enable and context.get('project_id', '') not in project_ids): self.event_queue.task_done() continue # Need to choose what trigger(s) should be called exactly. 
                    triggers_to_call = []

                    for t in triggers:
                        project_trigger = (
                            t['project_id'] == context.get('project_id')
                        )
                        public_trigger = t['scope'] == 'public'

                        if project_trigger or public_trigger:
                            triggers_to_call.append(t)

                    LOG.debug('Start to handle event: %s, %d trigger(s) '
                              'registered.', event_type, len(triggers))

                    event_params = self.notification_converter.convert(
                        event_type,
                        event
                    )

                    self._start_workflow(triggers_to_call, event_params)

            self.event_queue.task_done()

    def _start_handler(self):
        """Starts event queue handler in a thread group."""
        LOG.info('Starting event notification task...')

        self.handler_tg.add_thread(self._process_event_queue)

    def process_notification_event(self, notification):
        """Callback function by event handler.

        Just put notification into a queue.
        """
        LOG.debug("Putting notification event to event queue.")

        self.event_queue.put(notification)

    def create_event_trigger(self, trigger, events):
        """An endpoint method for creating an event trigger.

        When creating an event trigger in the API layer, we need to create
        a new listener or update an existing listener.

        :param trigger: a dict containing event trigger information.
        :param events: a list of events binding to the (exchange, topic)
                       of the event trigger.
        """
        with self.lock:
            ids = [t['id'] for t in self.event_triggers_map[trigger['event']]]

            if trigger['id'] not in ids:
                self.event_triggers_map[trigger['event']].append(trigger)

        self._add_event_listener(trigger['exchange'], trigger['topic'],
                                 events)

    def update_event_trigger(self, trigger):
        """An endpoint method for updating an event trigger.

        Because only workflow related information is allowed to be updated,
        we only need to update event_triggers_map (in a synchronous way).

        :param trigger: a dict containing event trigger information.
""" assert trigger['event'] in self.event_triggers_map with self.lock: for t in self.event_triggers_map[trigger['event']]: if trigger['id'] == t['id']: t.update(trigger) def delete_event_trigger(self, trigger, events): """An endpoint method for deleting event trigger. If there is no event binding to (exchange, topic) after deletion, we need to delete the related listener. Otherwise, we need to restart that listener. :param trigger: a dict containing event trigger information. :param events: a list of events binding to the (exchange, topic) of the event trigger. """ assert trigger['event'] in self.event_triggers_map with self.lock: for t in self.event_triggers_map[trigger['event']]: if t['id'] == trigger['id']: self.event_triggers_map[trigger['event']].remove(t) break if not self.event_triggers_map[trigger['event']]: del self.event_triggers_map[trigger['event']] if not events: key = (trigger['exchange'], trigger['topic']) listener = self.exchange_topic_listener_map[key] listener.stop() listener.wait() del self.exchange_topic_listener_map[key] LOG.info( 'Deleted listener for exchange: %s, topic: %s', trigger['exchange'], trigger['topic'] ) return security.delete_trust(trigger['trust_id']) self._add_event_listener(trigger['exchange'], trigger['topic'], events) mistral-6.0.0/mistral/event_engine/__init__.py0000666000175100017510000000000013245513261021440 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/messaging.py0000666000175100017510000000736713245513261017237 0ustar zuulzuul00000000000000# Copyright 2016 - IBM Corp. # Copyright 2016 Catalyst IT Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
This module contains common structures and functions that help to handle
AMQP messages based on the oslo.messaging framework.
"""

import abc

from oslo_config import cfg
from oslo_log import log as logging
import oslo_messaging
from oslo_messaging.notify import dispatcher
from oslo_messaging.notify import listener
from oslo_messaging import target
from oslo_messaging import transport
from oslo_utils import timeutils
import six


LOG = logging.getLogger(__name__)

CONF = cfg.CONF


def handle_event(self, ctxt, publisher_id, event_type, payload, metadata):
    """Callback function of each priority of notification messages.

    The function is used to construct endpoint class dynamically when
    starting listener in event engine service. After the class is created,
    'self' param will make sense.

    :param ctxt: the notification context dict
    :param publisher_id: always describes where notification is sent from,
        for example: 'compute.host1'
    :param event_type: describes the event, for example:
        'compute.create_instance'
    :param payload: the notification payload
    :param metadata: the notification metadata, is always a mapping
        containing a unique message_id and a timestamp.
    """
    LOG.debug('Received notification. publisher_id: %s, event_type: %s, '
              'payload: %s, metadata: %s.', publisher_id, event_type,
              payload, metadata)

    notification = {
        'event_type': event_type,
        'payload': payload,
        'publisher': publisher_id,
        'timestamp': metadata.get('timestamp', ctxt.get('timestamp',
                                                        timeutils.utcnow())),
        'context': ctxt
    }

    self.event_engine.process_notification_event(notification)

    return dispatcher.NotificationResult.HANDLED


@six.add_metaclass(abc.ABCMeta)
class NotificationEndpoint(object):
    """Message listener endpoint.

    Only handle notifications that match the NotificationFilter rule set
    into the filter_rule attribute of the endpoint.
    """
    event_types = []

    def __init__(self, event_engine):
        self.event_engine = event_engine
        self.filter_rule = oslo_messaging.NotificationFilter(
            event_type='|'.join(self.event_types))


def get_pool_name(exchange):
    """Get pool name.

    Get the pool name for the listener, it will be formatted as
    'mistral-exchange-hostname'

    :param exchange: exchange name
    """
    pool_host = CONF.event_engine.listener_pool_name
    pool_name = 'mistral-%s-%s' % (exchange, pool_host)

    LOG.debug("Listener pool name is %s", pool_name)

    return pool_name


def start_listener(conf, exchange, topic, endpoints):
    """Starts up a notification listener."""
    trans = transport.get_transport(conf)
    targets = [target.Target(exchange=exchange, topic=topic)]
    pool_name = get_pool_name(exchange)

    notification_listener = listener.get_notification_listener(
        trans,
        targets,
        endpoints,
        executor='threading',
        allow_requeue=False,
        pool=pool_name
    )
    notification_listener.start()

    return notification_listener

mistral-6.0.0/mistral/__init__.py

mistral-6.0.0/mistral/_i18n.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""oslo.i18n integration module.

See https://docs.openstack.org/oslo.i18n/latest/user/usage.html
"""

import oslo_i18n

DOMAIN = 'mistral'

_translators = oslo_i18n.TranslatorFactory(domain=DOMAIN)

# The primary translation function using the well-known name "_"
_ = _translators.primary

mistral-6.0.0/mistral/services/

mistral-6.0.0/mistral/services/actions.py

# Copyright 2015 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
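The `get_pool_name()` helper in `messaging.py` above builds the listener pool name from the exchange and the configured pool host. A minimal re-creation of that formatting, with the pool host passed in explicitly instead of being read from `CONF.event_engine.listener_pool_name` (the function name here is hypothetical):

```python
def make_pool_name(exchange, pool_host):
    # Same 'mistral-<exchange>-<host>' shape that get_pool_name() produces.
    return 'mistral-%s-%s' % (exchange, pool_host)
```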
import json

from mistral.db.v2 import api as db_api
from mistral import exceptions as exc
from mistral.lang import parser as spec_parser


def create_actions(definition, scope='private'):
    action_list_spec = spec_parser.get_action_list_spec_from_yaml(definition)

    db_actions = []

    for action_spec in action_list_spec.get_actions():
        db_actions.append(create_action(action_spec, definition, scope))

    return db_actions


def update_actions(definition, scope='private', identifier=None):
    action_list_spec = spec_parser.get_action_list_spec_from_yaml(definition)

    actions = action_list_spec.get_actions()

    if identifier and len(actions) > 1:
        raise exc.InputException(
            "More than one actions are not supported for "
            "update with identifier. [identifier: %s]" % identifier
        )

    db_actions = []

    for action_spec in action_list_spec.get_actions():
        db_actions.append(update_action(
            action_spec,
            definition,
            scope,
            identifier=identifier
        ))

    return db_actions


def create_or_update_actions(definition, scope='private'):
    action_list_spec = spec_parser.get_action_list_spec_from_yaml(definition)

    db_actions = []

    for action_spec in action_list_spec.get_actions():
        db_actions.append(
            create_or_update_action(action_spec, definition, scope)
        )

    return db_actions


def create_action(action_spec, definition, scope):
    return db_api.create_action_definition(
        _get_action_values(action_spec, definition, scope)
    )


def update_action(action_spec, definition, scope, identifier=None):
    action = db_api.load_action_definition(action_spec.get_name())

    if action and action.is_system:
        raise exc.InvalidActionException(
            "Attempt to modify a system action: %s" % action.name
        )

    values = _get_action_values(action_spec, definition, scope)

    return db_api.update_action_definition(
        identifier if identifier else values['name'],
        values
    )


def create_or_update_action(action_spec, definition, scope):
    action = db_api.load_action_definition(action_spec.get_name())

    if action and action.is_system:
        raise exc.InvalidActionException(
            "Attempt to modify a system action: %s" % action.name
        )

    values = _get_action_values(action_spec, definition, scope)

    return db_api.create_or_update_action_definition(values['name'], values)


def get_input_list(action_input):
    input_list = []

    for param in action_input:
        if isinstance(param, dict):
            for k, v in param.items():
                input_list.append("%s=%s" % (k, json.dumps(v)))
        else:
            input_list.append(param)

    return input_list


def _get_action_values(action_spec, definition, scope):
    action_input = action_spec.to_dict().get('input', [])
    input_list = get_input_list(action_input)

    values = {
        'name': action_spec.get_name(),
        'description': action_spec.get_description(),
        'tags': action_spec.get_tags(),
        'definition': definition,
        'spec': action_spec.to_dict(),
        'is_system': False,
        'input': ", ".join(input_list) if input_list else None,
        'scope': scope
    }

    return values

mistral-6.0.0/mistral/services/security.py

# Copyright 2013 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_config import cfg
from oslo_log import log as logging

from mistral import context as auth_ctx
from mistral.utils.openstack import keystone


LOG = logging.getLogger(__name__)

CONF = cfg.CONF

# Make sure to import 'auth_enable' option before using it.
# TODO(rakhmerov): Try to find a better solution.
CONF.import_opt('auth_enable', 'mistral.config', group='pecan')

DEFAULT_PROJECT_ID = ""


def get_project_id():
    if CONF.pecan.auth_enable and auth_ctx.has_ctx():
        return auth_ctx.ctx().project_id
    else:
        return DEFAULT_PROJECT_ID


def create_trust():
    client = keystone.client()

    ctx = auth_ctx.ctx()

    trustee_id = keystone.client_for_admin().session.get_user_id()

    return client.trusts.create(
        trustor_user=client.user_id,
        trustee_user=trustee_id,
        impersonation=True,
        role_names=ctx.roles,
        project=ctx.project_id
    )


def create_context(trust_id, project_id):
    """Creates Mistral security context.

    :param trust_id: Trust Id.
    :param project_id: Project Id.
    :return: Mistral security context.
    """
    if CONF.pecan.auth_enable:
        client = keystone.client_for_trusts(trust_id)

        if client.session:
            # Method get_token is deprecated, using get_auth_headers.
            token = client.session.get_auth_headers().get('X-Auth-Token')
            user_id = client.session.get_user_id()
        else:
            token = client.auth_token
            user_id = client.user_id

        return auth_ctx.MistralContext(
            user=user_id,
            tenant=project_id,
            auth_token=token,
            is_trust_scoped=True,
            trust_id=trust_id,
        )

    return auth_ctx.MistralContext(
        user=None,
        tenant=None,
        auth_token=None,
        is_admin=True
    )


def delete_trust(trust_id=None):
    if not trust_id:
        # Try to retrieve trust from context.
        if auth_ctx.has_ctx():
            trust_id = auth_ctx.ctx().trust_id

    if not trust_id:
        return

    keystone_client = keystone.client_for_trusts(trust_id)

    try:
        keystone_client.trusts.delete(trust_id)
    except Exception as e:
        LOG.warning("Failed to delete trust [id=%s]: %s", trust_id, e)


def add_trust_id(secure_object_values):
    if cfg.CONF.pecan.auth_enable:
        trust = create_trust()

        secure_object_values.update({
            'trust_id': trust.id
        })

mistral-6.0.0/mistral/services/expiration_policy.py

# Copyright 2015 - Alcatel-lucent, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import datetime
import traceback

from oslo_config import cfg
from oslo_log import log as logging
from oslo_service import periodic_task
from oslo_service import threadgroup

from mistral import context as auth_ctx
from mistral.db.v2 import api as db_api


LOG = logging.getLogger(__name__)

CONF = cfg.CONF


class ExecutionExpirationPolicy(periodic_task.PeriodicTasks):
    """Expiration Policy task.

    This task will run every 'evaluation_interval' and will remove old
    (expired) executions. The time interval is configurable in
    'mistral.cfg' and so is the expiration time (both in minutes).
    By default the interval is set to 'None', so this task is disabled.
    """

    def __init__(self, conf):
        super(ExecutionExpirationPolicy, self).__init__(conf)

        interval = CONF.execution_expiration_policy.evaluation_interval
        ot = CONF.execution_expiration_policy.older_than
        mfe = CONF.execution_expiration_policy.max_finished_executions

        if interval and ((ot and ot >= 1) or (mfe and mfe >= 1)):
            _periodic_task = periodic_task.periodic_task(
                spacing=interval * 60,
                run_immediately=True
            )
            self.add_periodic_task(
                _periodic_task(run_execution_expiration_policy)
            )
        else:
            LOG.debug("Expiration policy disabled. Evaluation_interval "
                      "is not configured or both older_than and "
                      "max_finished_executions < '1'.")


def _delete_executions(batch_size, expiration_time, max_finished_executions):
    _delete_until_depleted(
        lambda: db_api.get_expired_executions(
            expiration_time,
            batch_size
        )
    )
    _delete_until_depleted(
        lambda: db_api.get_superfluous_executions(
            max_finished_executions,
            batch_size
        )
    )


def _delete_until_depleted(fetch_func):
    while True:
        with db_api.transaction():
            execs = fetch_func()

            if not execs:
                break

            _delete(execs)


def _delete(executions):
    for execution in executions:
        try:
            # Setup project_id for _secure_query delete execution.
            # TODO(tuan_luong): Manipulation with auth_ctx should be
            # out of db transaction scope.
            ctx = auth_ctx.MistralContext(
                user=None,
                tenant=execution.project_id,
                auth_token=None,
                is_admin=True
            )
            auth_ctx.set_ctx(ctx)

            LOG.debug(
                'Delete execution id : %s from date : %s '
                'according to expiration policy',
                execution.id,
                execution.updated_at
            )
            db_api.delete_workflow_execution(execution.id)
        except Exception as e:
            msg = ("Failed to delete [execution_id=%s]\n %s"
                   % (execution.id, traceback.format_exc(e)))
            LOG.warning(msg)
        finally:
            auth_ctx.set_ctx(None)


def run_execution_expiration_policy(self, ctx):
    LOG.debug("Starting expiration policy.")

    older_than = CONF.execution_expiration_policy.older_than
    exp_time = (datetime.datetime.utcnow()
                - datetime.timedelta(minutes=older_than))

    batch_size = CONF.execution_expiration_policy.batch_size
    max_executions = CONF.execution_expiration_policy.max_finished_executions

    # The default value of batch size is 0.
    # If it is not set, the size of a batch will be the
    # total number of expired executions.
    _delete_executions(batch_size, exp_time, max_executions)


def setup():
    tg = threadgroup.ThreadGroup()
    pt = ExecutionExpirationPolicy(CONF)

    ctx = auth_ctx.MistralContext(
        user=None,
        tenant=None,
        auth_token=None,
        is_admin=True
    )

    tg.add_dynamic_timer(
        pt.run_periodic_tasks,
        initial_delay=None,
        periodic_interval_max=1,
        context=ctx
    )

    return tg

mistral-6.0.0/mistral/services/action_manager.py

# Copyright 2014 - Mirantis, Inc.
# Copyright 2014 - StackStorm, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_log import log as logging
from stevedore import extension

from mistral.actions import action_factory
from mistral.actions import generator_factory
from mistral.db.v2 import api as db_api
from mistral import exceptions as exc
from mistral.services import actions
from mistral import utils
from mistral.utils import inspect_utils as i_utils


# TODO(rakhmerov): Make methods more consistent and granular.

LOG = logging.getLogger(__name__)

ACTIONS_PATH = 'resources/actions'

_ACTION_CTX_PARAM = 'action_context'


# TODO(rakhmerov): It's confusing because we have std.xxx actions and actions
# TODO(rakhmerov): under '../resources/actions' that we also call standard.
def register_standard_actions():
    action_paths = utils.get_file_list(ACTIONS_PATH)

    for action_path in action_paths:
        action_definition = open(action_path).read()

        actions.create_or_update_actions(
            action_definition,
            scope='public'
        )


def get_registered_actions(**kwargs):
    return db_api.get_action_definitions(**kwargs)


def register_action_class(name, action_class_str, attributes,
                          description=None, input_str=None):
    values = {
        'name': name,
        'action_class': action_class_str,
        'attributes': attributes,
        'description': description,
        'input': input_str,
        'is_system': True,
        'scope': 'public'
    }

    try:
        LOG.debug("Registering action in DB: %s", name)

        db_api.create_action_definition(values)
    except exc.DBDuplicateEntryError:
        LOG.debug("Action %s already exists in DB.", name)


def _clear_system_action_db():
    db_api.delete_action_definitions(is_system=True)


def sync_db():
    with db_api.transaction():
        _clear_system_action_db()
        register_action_classes()
        register_standard_actions()


def _register_dynamic_action_classes():
    for generator in generator_factory.all_generators():
        actions = generator.create_actions()

        module = generator.base_action_class.__module__
        class_name = generator.base_action_class.__name__
        action_class_str = "%s.%s" % (module, class_name)

        for action in actions:
            attrs = i_utils.get_public_fields(action['class'])

            register_action_class(
                action['name'],
                action_class_str,
                attrs,
                action['description'],
                action['arg_list']
            )


def register_action_classes():
    mgr = extension.ExtensionManager(
        namespace='mistral.actions',
        invoke_on_load=False
    )

    for name in mgr.names():
        action_class_str = mgr[name].entry_point_target.replace(':', '.')
        action_class = mgr[name].plugin
        description = i_utils.get_docstring(action_class)
        input_str = i_utils.get_arg_list_as_str(action_class.__init__)

        attrs = i_utils.get_public_fields(mgr[name].plugin)

        register_action_class(
            name,
            action_class_str,
            attrs,
            description=description,
            input_str=input_str
        )

    _register_dynamic_action_classes()


def get_action_db(action_name):
    return db_api.load_action_definition(action_name)


def get_action_class(action_full_name):
    """Finds action class by full action name (i.e. 'namespace.action_name').

    :param action_full_name: Full action name (that includes namespace).
    :return: Action class or None if not found.
    """
    action_db = get_action_db(action_full_name)

    if action_db:
        return action_factory.construct_action_class(
            action_db.action_class,
            action_db.attributes
        )

mistral-6.0.0/mistral/services/scheduler.py

# Copyright 2014 - Mirantis, Inc.
# Copyright 2015 - StackStorm, Inc.
# Copyright 2016 - Brocade Communications Systems, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import copy
import datetime
import eventlet
import random
import sys
import threading

from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import importutils

from mistral import context
from mistral.db import utils as db_utils
from mistral.db.v2 import api as db_api
from mistral import exceptions as exc


LOG = logging.getLogger(__name__)

CONF = cfg.CONF

# All schedulers.
_schedulers = set()


def schedule_call(factory_method_path, target_method_name,
                  run_after, serializers=None, key=None, **method_args):
    """Schedules a call and later invokes target_method.

    Adds this call specification to the DB, and then, after run_after
    seconds, the CallScheduler service invokes the target_method.
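The `serializers` map accepted by `schedule_call()` lets a caller declare, per argument name, how that argument should be turned into a persistable form before the delayed call is stored. A minimal sketch of that lookup-and-apply step, with serializer instances passed in directly instead of imported from dotted paths (`DummySerializer` and `apply_serializers` are hypothetical names):

```python
class DummySerializer(object):
    """Illustrative serializer: just repr() the value."""

    def serialize(self, value):
        return repr(value)


def apply_serializers(method_args, serializers):
    # Each named argument is replaced by its serialized form, mirroring
    # what schedule_call() does before persisting the delayed call.
    serializers = serializers or {}

    for arg_name, serializer in serializers.items():
        if arg_name not in method_args:
            raise ValueError(
                "Serializable method argument %s not found" % arg_name)

        method_args[arg_name] = serializer.serialize(method_args[arg_name])

    return method_args
```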
    :param factory_method_path: Full python-specific path to factory method
        that creates a target object that the call will be made against.
    :param target_method_name: Name of a method which will be invoked.
    :param run_after: Value in seconds.
    :param serializers: map of argument names and their serializer class
        paths. Use when an argument is an object of specific type, and
        needs to be serialized. Example:
        { "result": "mistral.utils.serializer.ResultSerializer"}
        Serializer for the object type must implement serializer interface
        in mistral/utils/serializer.py
    :param key: Key which can potentially be used for squashing similar
        delayed calls.
    :param method_args: Target method keyword arguments.
    """
    ctx_serializer = context.RpcContextSerializer()

    ctx = (
        ctx_serializer.serialize_context(context.ctx())
        if context.has_ctx() else {}
    )

    execution_time = (datetime.datetime.now() +
                      datetime.timedelta(seconds=run_after))

    if serializers:
        for arg_name, serializer_path in serializers.items():
            if arg_name not in method_args:
                raise exc.MistralException(
                    "Serializable method argument %s"
                    " not found in method_args=%s"
                    % (arg_name, method_args))

            try:
                serializer = importutils.import_class(serializer_path)()
            except ImportError as e:
                raise ImportError(
                    "Cannot import class %s: %s" % (serializer_path, e)
                )

            method_args[arg_name] = serializer.serialize(
                method_args[arg_name]
            )

    values = {
        'factory_method_path': factory_method_path,
        'target_method_name': target_method_name,
        'execution_time': execution_time,
        'auth_context': ctx,
        'serializers': serializers,
        'key': key,
        'method_arguments': method_args,
        'processing': False
    }

    db_api.create_delayed_call(values)


class Scheduler(object):
    def __init__(self, fixed_delay, random_delay, batch_size):
        self._stopped = False
        self._thread = threading.Thread(target=self._loop)
        self._thread.daemon = True
        self._fixed_delay = fixed_delay
        self._random_delay = random_delay
        self._batch_size = batch_size

    def start(self):
        self._thread.start()

    def stop(self, graceful=False):
        self._stopped = True

        if graceful:
            self._thread.join()

    def _loop(self):
        while not self._stopped:
            LOG.debug("Starting Scheduler loop [scheduler=%s]...", self)

            try:
                self._process_delayed_calls()
            except Exception:
                LOG.exception(
                    "Scheduler failed to process delayed calls"
                    " due to unexpected exception."
                )

                # For some mysterious reason (probably eventlet related)
                # the exception is not cleared from the context
                # automatically. This results in subsequent log.warning
                # calls to show invalid info.
                if sys.version_info < (3,):
                    sys.exc_clear()

            eventlet.sleep(
                self._fixed_delay +
                random.Random().randint(0, self._random_delay * 1000) * 0.001
            )

    def _process_delayed_calls(self, ctx=None):
        """Run delayed required calls.

        This algorithm should work with transactions having at least
        'READ-COMMITTED' isolation mode.

        :param ctx: Auth context.
        """
        # Select and capture calls matching time criteria.
        db_calls = self._capture_calls(self._batch_size)

        if not db_calls:
            return

        # Determine target methods, deserialize arguments etc.
        prepared_calls = self._prepare_calls(db_calls)

        # Invoke prepared calls.
        self._invoke_calls(prepared_calls)

        # Delete invoked calls from DB.
        self.delete_calls(db_calls)

    @staticmethod
    @db_utils.retry_on_db_error
    def _capture_calls(batch_size):
        """Captures delayed calls eligible for processing (based on time).

        The intention of this method is to select delayed calls based on
        time criteria and mark them in DB as being processed so that no
        other threads could process them in parallel.

        :return: A list of delayed calls captured for further processing.
        """
        result = []

        time_filter = datetime.datetime.now() + datetime.timedelta(seconds=1)

        with db_api.transaction():
            candidates = db_api.get_delayed_calls_to_start(
                time_filter,
                batch_size
            )

            for call in candidates:
                # Mark this delayed call as being processed in order to
                # prevent processing it in a parallel transaction.
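The capture step above relies on an optimistic, conditional update: each scheduler tries to flip `processing` from `False` to `True`, and only the one whose guarded update reports a row changed gets to process the call. A minimal sketch of that claim protocol with an in-memory table standing in for the DB (names are illustrative):

```python
def try_claim(call):
    """Conditional update guarded by processing == False.

    Returns 1 if this caller claimed the row, 0 if another scheduler
    already did (mirrors the updated-row count a SQL UPDATE returns).
    """
    if call['processing']:
        return 0

    call['processing'] = True
    return 1


def capture_calls(table):
    # Keep only the calls this scheduler managed to claim.
    return [call for call in table if try_claim(call) == 1]
```

With READ_COMMITTED isolation the second scheduler's guarded UPDATE sees the first scheduler's committed flag flip and matches zero rows, so each call is captured exactly once.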
                db_call, updated_cnt = db_api.update_delayed_call(
                    id=call.id,
                    values={'processing': True},
                    query_filter={'processing': False}
                )

                # If updated_cnt != 1 then another scheduler
                # has already updated it.
                if updated_cnt == 1:
                    result.append(db_call)

        LOG.debug("Scheduler captured %s delayed calls.", len(result))

        return result

    @staticmethod
    def _prepare_calls(raw_calls):
        """Prepares delayed calls for invocation.

        After delayed calls were selected from DB they still need to be
        prepared for further usage, we need to build final target methods
        and deserialize arguments, if needed.

        :param raw_calls: Delayed calls fetched from DB (DB models).
        :return: A list of tuples (target_auth_context, target_method,
            method_args) where all data is properly deserialized.
        """
        result = []

        for call in raw_calls:
            LOG.debug(
                'Preparing next delayed call. '
                '[ID=%s, factory_method_path=%s, target_method_name=%s, '
                'method_arguments=%s]', call.id, call.factory_method_path,
                call.target_method_name, call.method_arguments
            )

            target_auth_context = copy.deepcopy(call.auth_context)

            if call.factory_method_path:
                factory = importutils.import_class(call.factory_method_path)

                target_method = getattr(factory(), call.target_method_name)
            else:
                target_method = importutils.import_class(
                    call.target_method_name
                )

            method_args = copy.deepcopy(call.method_arguments)

            if call.serializers:
                # Deserialize arguments.
                for arg_name, ser_path in call.serializers.items():
                    serializer = importutils.import_class(ser_path)()

                    deserialized = serializer.deserialize(
                        method_args[arg_name]
                    )

                    method_args[arg_name] = deserialized

            result.append((target_auth_context, target_method, method_args))

        return result

    @staticmethod
    def _invoke_calls(delayed_calls):
        """Invokes prepared delayed calls.

        :param delayed_calls: Prepared delayed calls represented as tuples
            (target_auth_context, target_method, method_args).
""" ctx_serializer = context.RpcContextSerializer() for (target_auth_context, target_method, method_args) in delayed_calls: try: # Set the correct context for the method. ctx_serializer.deserialize_context(target_auth_context) # Invoke the method. target_method(**method_args) except Exception as e: LOG.exception( "Delayed call failed, method: %s, exception: %s", target_method, e ) finally: # Remove context. context.set_ctx(None) @staticmethod @db_utils.retry_on_db_error def delete_calls(db_calls): """Deletes delayed calls. :param db_calls: Delayed calls to delete from DB. """ with db_api.transaction(): for call in db_calls: try: db_api.delete_delayed_call(call.id) except Exception as e: LOG.error( "Failed to delete delayed call [call=%s, " "exception=%s]", call, e ) # We have to re-raise any exception because the transaction # would be already invalid anyway. If it's a deadlock then # it will be handled. raise e LOG.debug("Scheduler deleted %s delayed calls.", len(db_calls)) def start(): sched = Scheduler( CONF.scheduler.fixed_delay, CONF.scheduler.random_delay, CONF.scheduler.batch_size ) _schedulers.add(sched) sched.start() return sched def stop_scheduler(sched, graceful=False): if not sched: return sched.stop(graceful) _schedulers.remove(sched) def stop_all_schedulers(): for sched in _schedulers: sched.stop(graceful=True) _schedulers.clear() mistral-6.0.0/mistral/services/__init__.py0000666000175100017510000000000013245513261020615 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/services/workbooks.py0000666000175100017510000001002213245513261021103 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from mistral.db.v2 import api as db_api_v2
from mistral.lang import parser as spec_parser
from mistral.services import actions


def create_workbook_v2(definition, scope='private'):
    wb_spec = spec_parser.get_workbook_spec_from_yaml(definition)

    wb_values = _get_workbook_values(
        wb_spec,
        definition,
        scope
    )

    with db_api_v2.transaction():
        wb_db = db_api_v2.create_workbook(wb_values)

        _on_workbook_update(wb_db, wb_spec)

    return wb_db


def update_workbook_v2(definition, scope='private'):
    wb_spec = spec_parser.get_workbook_spec_from_yaml(definition)

    values = _get_workbook_values(wb_spec, definition, scope)

    with db_api_v2.transaction():
        wb_db = db_api_v2.update_workbook(values['name'], values)

        _, db_wfs = _on_workbook_update(wb_db, wb_spec)

    return wb_db


def _on_workbook_update(wb_db, wb_spec):
    db_actions = _create_or_update_actions(wb_db, wb_spec.get_actions())

    db_wfs = _create_or_update_workflows(wb_db, wb_spec.get_workflows())

    return db_actions, db_wfs


def _create_or_update_actions(wb_db, actions_spec):
    db_actions = []

    if actions_spec:
        for action_spec in actions_spec:
            action_name = '%s.%s' % (wb_db.name, action_spec.get_name())

            input_list = actions.get_input_list(
                action_spec.to_dict().get('input', [])
            )

            values = {
                'name': action_name,
                'spec': action_spec.to_dict(),
                'tags': action_spec.get_tags(),
                'definition': _get_action_definition(wb_db, action_spec),
                'description': action_spec.get_description(),
                'is_system': False,
                'input': ', '.join(input_list) if input_list else None,
                'scope': wb_db.scope,
                'project_id': wb_db.project_id
            }

            db_actions.append(
                db_api_v2.create_or_update_action_definition(
                    action_name,
                    values
                )
            )

    return db_actions


def _create_or_update_workflows(wb_db, workflows_spec):
    db_wfs = []

    if workflows_spec:
        for wf_spec in workflows_spec:
            wf_name = '%s.%s' % (wb_db.name, wf_spec.get_name())

            values = {
                'name': wf_name,
                'definition': _get_wf_definition(wb_db, wf_spec),
                'spec': wf_spec.to_dict(),
                'scope': wb_db.scope,
                'project_id': wb_db.project_id,
                'namespace': '',
                'tags': wf_spec.get_tags(),
                'is_system': False
            }

            db_wfs.append(
                db_api_v2.create_or_update_workflow_definition(
                    wf_name,
                    values
                )
            )

    return db_wfs


def _get_workbook_values(wb_spec, definition, scope):
    values = {
        'name': wb_spec.get_name(),
        'tags': wb_spec.get_tags(),
        'definition': definition,
        'spec': wb_spec.to_dict(),
        'scope': scope,
        'is_system': False
    }

    return values


def _get_wf_definition(wb_db, wf_spec):
    wf_definition = spec_parser.get_workflow_definition(
        wb_db.definition,
        wf_spec.get_name()
    )

    return wf_definition


def _get_action_definition(wb_db, action_spec):
    action_definition = spec_parser.get_action_definition(
        wb_db.definition,
        action_spec.get_name()
    )

    return action_definition

mistral-6.0.0/mistral/services/periodic.py

# Copyright 2013 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
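In `workbooks.py` above, workflows and actions defined inside a workbook are stored under a qualified `<workbook>.<name>` identifier so they do not collide with standalone definitions. A trivial sketch of that naming rule (function name here is illustrative):

```python
def qualified_name(workbook_name, entity_name):
    # Same '%s.%s' qualification used for workbook workflows and actions.
    return '%s.%s' % (workbook_name, entity_name)
```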
import json

from oslo_config import cfg
from oslo_log import log as logging
from oslo_service import periodic_task
from oslo_service import threadgroup

from mistral import context as auth_ctx
from mistral.db.v2 import api as db_api_v2
from mistral import exceptions as exc
from mistral.rpc import clients as rpc
from mistral.services import security
from mistral.services import triggers


LOG = logging.getLogger(__name__)

CONF = cfg.CONF

# {periodic_task: thread_group}
_periodic_tasks = {}


def process_cron_triggers_v2(self, ctx):
    LOG.debug("Processing cron triggers...")

    for trigger in triggers.get_next_cron_triggers():
        LOG.debug("Processing cron trigger: %s", trigger)

        try:
            # Setup admin context before schedule triggers.
            ctx = security.create_context(
                trigger.trust_id,
                trigger.project_id
            )

            auth_ctx.set_ctx(ctx)

            LOG.debug("Cron trigger security context: %s", ctx)

            # Try to advance the cron trigger next_execution_time and
            # remaining_executions if relevant.
            modified = advance_cron_trigger(trigger)

            # If cron trigger was not already modified by another engine.
            if modified:
                LOG.debug(
                    "Starting workflow '%s' by cron trigger '%s'",
                    trigger.workflow.name, trigger.name
                )

                description = {
                    "description": (
                        "Workflow execution created by cron"
                        " trigger '(%s)'." % trigger.id
                    ),
                    "triggered_by": {
                        "type": "cron_trigger",
                        "id": trigger.id,
                        "name": trigger.name,
                    }
                }

                rpc.get_engine_client().start_workflow(
                    trigger.workflow.name,
                    trigger.workflow.namespace,
                    None,
                    trigger.workflow_input,
                    description=json.dumps(description),
                    **trigger.workflow_params
                )
        except Exception:
            # Log and continue to next cron trigger.
            LOG.exception(
                "Failed to process cron trigger %s",
                str(trigger)
            )
        finally:
            auth_ctx.set_ctx(None)


class MistralPeriodicTasks(periodic_task.PeriodicTasks):
    def __init__(self, conf):
        super(MistralPeriodicTasks, self).__init__(conf)

        periodic_task_ = periodic_task.periodic_task(
            spacing=CONF.cron_trigger.execution_interval,
            run_immediately=True,
        )

        self.add_periodic_task(periodic_task_(process_cron_triggers_v2))


def advance_cron_trigger(t):
    modified_count = 0

    try:
        # If the cron trigger is defined with limited execution count.
        if t.remaining_executions is not None and t.remaining_executions > 0:
            t.remaining_executions -= 1

        # If this is the last execution.
        if t.remaining_executions == 0:
            modified_count = triggers.delete_cron_trigger(
                t.name,
                trust_id=t.trust_id,
                delete_trust=False
            )
        else:  # if remaining execution = None or > 0.
            next_time = triggers.get_next_execution_time(
                t.pattern,
                t.next_execution_time
            )

            # Update the cron trigger with next execution details
            # only if it wasn't already updated by a different process.
            updated, modified_count = db_api_v2.update_cron_trigger(
                t.name,
                {
                    'next_execution_time': next_time,
                    'remaining_executions': t.remaining_executions
                },
                query_filter={
                    'next_execution_time': t.next_execution_time
                }
            )
    except exc.DBEntityNotFoundError as e:
        # Cron trigger was probably already deleted by a different process.
        LOG.debug(
            "Cron trigger named '%s' does not exist anymore: %s",
            t.name, str(e)
        )

    # Return True if this engine was able to modify the cron trigger in DB.
return modified_count > 0 def setup(): tg = threadgroup.ThreadGroup() pt = MistralPeriodicTasks(CONF) ctx = auth_ctx.MistralContext( user=None, tenant=None, auth_token=None, is_admin=True ) tg.add_dynamic_timer( pt.run_periodic_tasks, initial_delay=None, periodic_interval_max=1, context=ctx ) _periodic_tasks[pt] = tg return tg def stop_all_periodic_tasks(): for tg in _periodic_tasks.values(): tg.stop() _periodic_tasks.clear() mistral-6.0.0/mistral/services/workflows.py0000666000175100017510000001144413245513261021131 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
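The `advance_cron_trigger` function above avoids double-firing a trigger when several engines poll concurrently: the DB update is made conditional on `next_execution_time` still holding the value this engine read. A minimal, self-contained sketch of that compare-and-swap pattern, using an in-memory store with hypothetical names in place of `db_api_v2.update_cron_trigger`:

```python
import threading

class TriggerStore:
    """Toy stand-in for the conditional UPDATE used by advance_cron_trigger."""

    def __init__(self, next_execution_time):
        self._lock = threading.Lock()
        self.next_execution_time = next_execution_time

    def update_if_unchanged(self, new_time, expected_time):
        # Mirrors update_cron_trigger(..., query_filter={'next_execution_time': ...}):
        # the write only happens if no other engine advanced the trigger first.
        with self._lock:
            if self.next_execution_time != expected_time:
                return 0  # lost the race; modified_count stays 0
            self.next_execution_time = new_time
            return 1

store = TriggerStore(next_execution_time=100)

# Two engines read the same snapshot (100) and both try to advance to 160.
results = [store.update_if_unchanged(160, expected_time=100) for _ in range(2)]
```

Only the first update wins; the loser sees a modified count of zero and, as in `process_cron_triggers_v2`, simply skips starting the workflow.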
from mistral.db.v2 import api as db_api from mistral import exceptions as exc from mistral.lang import parser as spec_parser from mistral import utils from mistral.workflow import data_flow from mistral.workflow import states from oslo_log import log as logging STD_WF_PATH = 'resources/workflows' LOG = logging.getLogger(__name__) def register_standard_workflows(run_in_tx=True): LOG.debug("Registering standard workflows") workflow_paths = utils.get_file_list(STD_WF_PATH) for wf_path in workflow_paths: workflow_definition = open(wf_path).read() create_workflows( workflow_definition, scope='public', is_system=True, run_in_tx=run_in_tx, namespace='' ) def _clear_system_workflow_db(): db_api.delete_workflow_definitions(is_system=True) def sync_db(): LOG.debug("Syncing db") with db_api.transaction(): _clear_system_workflow_db() register_standard_workflows(run_in_tx=False) def create_workflows(definition, scope='private', is_system=False, run_in_tx=True, namespace=''): LOG.debug("Creating workflows") wf_list_spec = spec_parser.get_workflow_list_spec_from_yaml(definition) db_wfs = [] if run_in_tx: with db_api.transaction(): _append_all_workflows( definition, is_system, scope, namespace, wf_list_spec, db_wfs ) else: _append_all_workflows( definition, is_system, scope, namespace, wf_list_spec, db_wfs ) return db_wfs def _append_all_workflows(definition, is_system, scope, namespace, wf_list_spec, db_wfs): for wf_spec in wf_list_spec.get_workflows(): db_wfs.append( _create_workflow( wf_spec, definition, scope, namespace, is_system ) ) def update_workflows(definition, scope='private', identifier=None, namespace=''): LOG.debug("Updating workflows") wf_list_spec = spec_parser.get_workflow_list_spec_from_yaml(definition) wfs = wf_list_spec.get_workflows() if identifier and len(wfs) > 1: raise exc.InputException( "More than one workflows are not supported for " "update with identifier. 
[identifier: %s]" % identifier ) db_wfs = [] with db_api.transaction(): for wf_spec in wf_list_spec.get_workflows(): db_wfs.append(_update_workflow( wf_spec, definition, scope, namespace=namespace, identifier=identifier )) return db_wfs def update_workflow_execution_env(wf_ex, env): if not env: return wf_ex if wf_ex.state not in [states.IDLE, states.PAUSED, states.ERROR]: raise exc.NotAllowedException( 'Updating env to workflow execution is only permitted if ' 'it is in IDLE, PAUSED, or ERROR state.' ) wf_ex.params['env'] = utils.merge_dicts(wf_ex.params['env'], env) data_flow.add_environment_to_context(wf_ex) return wf_ex def _get_workflow_values(wf_spec, definition, scope, namespace=None, is_system=False): values = { 'name': wf_spec.get_name(), 'tags': wf_spec.get_tags(), 'definition': definition, 'spec': wf_spec.to_dict(), 'scope': scope, 'namespace': namespace, 'is_system': is_system } return values def _create_workflow(wf_spec, definition, scope, namespace, is_system): return db_api.create_workflow_definition( _get_workflow_values(wf_spec, definition, scope, namespace, is_system) ) def _update_workflow(wf_spec, definition, scope, identifier=None, namespace=''): values = _get_workflow_values(wf_spec, definition, scope, namespace) return db_api.update_workflow_definition( identifier if identifier else values['name'], values, namespace=namespace ) mistral-6.0.0/mistral/services/triggers.py0000666000175100017510000001677113245513261020732 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. import croniter import datetime import json import six from oslo_log import log as logging from mistral.db.v2 import api as db_api from mistral.engine import utils as eng_utils from mistral import exceptions as exc from mistral.lang import parser from mistral.rpc import clients as rpc from mistral.services import security LOG = logging.getLogger(__name__) def get_next_execution_time(pattern, start_time): return croniter.croniter(pattern, start_time).get_next( datetime.datetime ) # Triggers v2. def get_next_cron_triggers(): return db_api.get_next_cron_triggers( datetime.datetime.utcnow() + datetime.timedelta(0, 2) ) def validate_cron_trigger_input(pattern, first_time, count): if not (first_time or pattern): raise exc.InvalidModelException( 'Pattern or first_execution_time must be specified.' ) if first_time: valid_min_time = datetime.datetime.utcnow() + datetime.timedelta(0, 60) if valid_min_time > first_time: raise exc.InvalidModelException( 'first_execution_time must be at least 1 minute in the future.' ) if not pattern and count and count > 1: raise exc.InvalidModelException( 'Pattern must be provided if count is superior to 1.' 
) if pattern: try: croniter.croniter(pattern) except (ValueError, KeyError): raise exc.InvalidModelException( 'The specified pattern is not valid: {}'.format(pattern) ) def create_cron_trigger(name, workflow_name, workflow_input, workflow_params=None, pattern=None, first_time=None, count=None, start_time=None, workflow_id=None): if not start_time: start_time = datetime.datetime.utcnow() if isinstance(first_time, six.string_types): try: first_time = datetime.datetime.strptime( first_time, '%Y-%m-%d %H:%M' ) except ValueError as e: raise exc.InvalidModelException(str(e)) validate_cron_trigger_input(pattern, first_time, count) if first_time: next_time = first_time if not (pattern or count): count = 1 else: next_time = get_next_execution_time(pattern, start_time) with db_api.transaction(): wf_def = db_api.get_workflow_definition( workflow_id if workflow_id else workflow_name ) wf_spec = parser.get_workflow_spec_by_definition_id( wf_def.id, wf_def.updated_at ) # TODO(rakhmerov): Use Workflow object here instead of utils. eng_utils.validate_input( wf_spec.get_input(), workflow_input, wf_spec.get_name(), wf_spec.__class__.__name__ ) trigger_parameters = { 'name': name, 'pattern': pattern, 'first_execution_time': first_time, 'next_execution_time': next_time, 'remaining_executions': count, 'workflow_name': wf_def.name, 'workflow_id': wf_def.id, 'workflow_input': workflow_input or {}, 'workflow_params': workflow_params or {}, 'scope': 'private' } security.add_trust_id(trigger_parameters) try: trig = db_api.create_cron_trigger(trigger_parameters) except Exception: # Delete trust before raising exception. 
security.delete_trust(trigger_parameters.get('trust_id')) raise return trig def delete_cron_trigger(identifier, trust_id=None, delete_trust=True): if not trust_id: trigger = db_api.get_cron_trigger(identifier) trust_id = trigger.trust_id modified_count = db_api.delete_cron_trigger(identifier) if modified_count and delete_trust: # Delete trust only together with deleting trigger. security.delete_trust(trust_id) return modified_count def create_event_trigger(name, exchange, topic, event, workflow_id, scope='private', workflow_input=None, workflow_params=None): with db_api.transaction(): wf_def = db_api.get_workflow_definition_by_id(workflow_id) wf_spec = parser.get_workflow_spec_by_definition_id( wf_def.id, wf_def.updated_at ) # TODO(rakhmerov): Use Workflow object here instead of utils. eng_utils.validate_input( wf_spec.get_input(), workflow_input, wf_spec.get_name(), wf_spec.__class__.__name__ ) values = { 'name': name, 'workflow_id': workflow_id, 'workflow_input': workflow_input or {}, 'workflow_params': workflow_params or {}, 'exchange': exchange, 'topic': topic, 'event': event, 'scope': scope, } security.add_trust_id(values) trig = db_api.create_event_trigger(values) trigs = db_api.get_event_triggers(insecure=True, exchange=exchange, topic=topic) events = [t.event for t in trigs] # NOTE(kong): Send RPC message within the db transaction, rollback if # any error occurs. trig_dict = trig.to_dict() trig_dict['workflow_namespace'] = wf_def.namespace rpc.get_event_engine_client().create_event_trigger( trig_dict, events ) return trig def delete_event_trigger(event_trigger): db_api.delete_event_trigger(event_trigger['id']) trigs = db_api.get_event_triggers( insecure=True, exchange=event_trigger['exchange'], topic=event_trigger['topic'] ) events = set([t.event for t in trigs]) # NOTE(kong): Send RPC message within the db transaction, rollback if # any error occurs. 
rpc.get_event_engine_client().delete_event_trigger( event_trigger, list(events) ) def update_event_trigger(id, values): trig = db_api.update_event_trigger(id, values) # NOTE(kong): Send RPC message within the db transaction, rollback if # any error occurs. rpc.get_event_engine_client().update_event_trigger(trig.to_dict()) return trig def on_workflow_complete(wf_ex): if wf_ex.task_execution_id: return try: description = json.loads(wf_ex.description) except ValueError as e: LOG.debug(str(e)) return if not isinstance(description, dict): return triggered = description.get('triggered_by') if not triggered: return if triggered['type'] == 'cron_trigger': if not db_api.load_cron_trigger(triggered['name']): security.delete_trust() elif triggered['type'] == 'event_trigger': if not db_api.load_event_trigger(triggered['id'], True): security.delete_trust() mistral-6.0.0/mistral/auth/0000775000175100017510000000000013245513604015633 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/auth/keycloak.py0000666000175100017510000001052213245513261020010 0ustar zuulzuul00000000000000# Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
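`create_cron_trigger` above accepts `first_time` either as a `datetime` or as a `'%Y-%m-%d %H:%M'` string, and `validate_cron_trigger_input` rejects times less than a minute in the future. A self-contained sketch of that handling (`parse_first_time` is a hypothetical helper, not part of Mistral):

```python
import datetime

FIRST_TIME_FORMAT = '%Y-%m-%d %H:%M'

def parse_first_time(value, now):
    # String input is parsed the same way create_cron_trigger does it.
    if isinstance(value, str):
        value = datetime.datetime.strptime(value, FIRST_TIME_FORMAT)
    # validate_cron_trigger_input: must be at least 1 minute in the future.
    if now + datetime.timedelta(seconds=60) > value:
        raise ValueError(
            'first_execution_time must be at least 1 minute in the future.')
    return value

now = datetime.datetime(2018, 3, 1, 12, 0)
first = parse_first_time('2018-03-01 12:05', now)
```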
import os import jwt from oslo_config import cfg from oslo_log import log as logging import pprint import requests from six.moves import urllib from mistral._i18n import _ from mistral import auth from mistral import exceptions as exc LOG = logging.getLogger(__name__) CONF = cfg.CONF class KeycloakAuthHandler(auth.AuthHandler): def authenticate(self, req): certfile = CONF.keycloak_oidc.certfile keyfile = CONF.keycloak_oidc.keyfile cafile = CONF.keycloak_oidc.cafile or self.get_system_ca_file() insecure = CONF.keycloak_oidc.insecure if 'X-Auth-Token' not in req.headers: msg = _("Auth token must be provided in 'X-Auth-Token' header.") LOG.error(msg) raise exc.UnauthorizedException(message=msg) access_token = req.headers.get('X-Auth-Token') try: decoded = jwt.decode(access_token, algorithms=['RS256'], verify=False) except Exception: msg = _("Token can't be decoded because of wrong format.") LOG.error(msg) raise exc.UnauthorizedException(message=msg) # Get user realm from parsed token # Format is "iss": "http://:/auth/realms/", __, __, realm_name = decoded['iss'].strip().rpartition('/realms/') # Get roles from from parsed token roles = ','.join(decoded['realm_access']['roles']) \ if 'realm_access' in decoded else '' # NOTE(rakhmerov): There's a special endpoint for introspecting # access tokens described in OpenID Connect specification but it's # available in KeyCloak starting only with version 1.8.Final so we have # to use user info endpoint which also takes exactly one parameter # (access token) and replies with error if token is invalid. 
user_info_endpoint = ( "%s/realms/%s/protocol/openid-connect/userinfo" % (CONF.keycloak_oidc.auth_url, realm_name) ) verify = None if urllib.parse.urlparse(user_info_endpoint).scheme == "https": verify = False if insecure else cafile cert = (certfile, keyfile) if certfile and keyfile else None try: resp = requests.get( user_info_endpoint, headers={"Authorization": "Bearer %s" % access_token}, verify=verify, cert=cert ) except requests.ConnectionError: msg = _("Can't connect to keycloak server with address '%s'." ) % CONF.keycloak_oidc.auth_url LOG.error(msg) raise exc.MistralException(message=msg) resp.raise_for_status() LOG.debug( "HTTP response from OIDC provider: %s", pprint.pformat(resp.json()) ) req.headers["X-Identity-Status"] = "Confirmed" req.headers["X-Project-Id"] = realm_name req.headers["X-Roles"] = roles @staticmethod def get_system_ca_file(): """Return path to system default CA file.""" # Standard CA file locations for Debian/Ubuntu, RedHat/Fedora, # Suse, FreeBSD/OpenBSD, MacOSX, and the bundled ca ca_path = ['/etc/ssl/certs/ca-certificates.crt', '/etc/pki/tls/certs/ca-bundle.crt', '/etc/ssl/ca-bundle.pem', '/etc/ssl/cert.pem', '/System/Library/OpenSSL/certs/cacert.pem', requests.certs.where()] for ca in ca_path: LOG.debug("Looking for ca file %s", ca) if os.path.exists(ca): LOG.debug("Using ca file %s", ca) return ca LOG.warning("System ca file could not be found.") mistral-6.0.0/mistral/auth/keystone.py0000666000175100017510000000273513245513261020056 0ustar zuulzuul00000000000000# Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg from mistral import auth from mistral import exceptions as exc CONF = cfg.CONF class KeystoneAuthHandler(auth.AuthHandler): def authenticate(self, req): # Note(nmakhotkin): Since we have deferred authentication, # need to check for auth manually (check for corresponding # headers according to keystonemiddleware docs. identity_status = req.headers.get('X-Identity-Status') service_identity_status = req.headers.get('X-Service-Identity-Status') if (identity_status == 'Confirmed' or service_identity_status == 'Confirmed'): return if req.headers.get('X-Auth-Token'): msg = 'Auth token is invalid: %s' % req.headers['X-Auth-Token'] else: msg = 'Authentication required' raise exc.UnauthorizedException(msg) mistral-6.0.0/mistral/auth/__init__.py0000666000175100017510000000245313245513261017751 0ustar zuulzuul00000000000000# Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
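`KeystoneAuthHandler.authenticate` above trusts headers set by keystonemiddleware rather than validating tokens itself. Its core decision reduces to a header check like the following sketch (`is_authenticated` is a hypothetical name, not a Mistral function):

```python
def is_authenticated(headers):
    # Either the user identity or the service identity must have been
    # confirmed by keystonemiddleware before the request reached Mistral.
    return (headers.get('X-Identity-Status') == 'Confirmed' or
            headers.get('X-Service-Identity-Status') == 'Confirmed')

user_ok = is_authenticated({'X-Identity-Status': 'Confirmed'})
service_ok = is_authenticated({'X-Service-Identity-Status': 'Confirmed'})
anonymous = is_authenticated({'X-Auth-Token': 'abc'})
```

A request that carries a token but no confirmed identity header is still rejected, which matches the "Auth token is invalid" branch in the handler.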
import abc from oslo_config import cfg import six from stevedore import driver from mistral import exceptions as exc _IMPL_AUTH_HANDLER = None def get_auth_handler(): auth_type = cfg.CONF.auth_type global _IMPL_AUTH_HANDLER if not _IMPL_AUTH_HANDLER: mgr = driver.DriverManager( 'mistral.auth', auth_type, invoke_on_load=True ) _IMPL_AUTH_HANDLER = mgr.driver return _IMPL_AUTH_HANDLER @six.add_metaclass(abc.ABCMeta) class AuthHandler(object): """Abstract base class for an authentication plugin.""" @abc.abstractmethod def authenticate(self, req): raise exc.UnauthorizedException() mistral-6.0.0/mistral/expressions/0000775000175100017510000000000013245513604017254 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/expressions/base_expression.py0000666000175100017510000000324213245513261023021 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import abc class Evaluator(object): """Expression evaluator interface. Having this interface gives the flexibility to change the actual expression language used in Mistral DSL for conditions, output calculation etc. """ @classmethod @abc.abstractmethod def validate(cls, expression): """Parse and validates the expression. :param expression: Expression string :return: True if expression is valid """ pass @classmethod @abc.abstractmethod def evaluate(cls, expression, context): """Evaluates the expression against the given data context. 
:param expression: Expression string :param context: Data context :return: Expression result """ pass @classmethod @abc.abstractmethod def is_expression(cls, expression): """Check expression string and decide whether it is expression or not. :param expression: Expression string :return: True if string is expression """ pass mistral-6.0.0/mistral/expressions/yaql_expression.py0000666000175100017510000001143413245513261023057 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
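`get_auth_handler` above loads the configured plugin once through stevedore and caches it in a module-level global. The same load-once pattern can be sketched with a plain factory standing in for `stevedore.driver.DriverManager` (the names below are illustrative, not Mistral's actual module):

```python
import abc

class AuthHandler(abc.ABC):
    """Abstract contract, as in mistral.auth.AuthHandler."""

    @abc.abstractmethod
    def authenticate(self, req):
        raise NotImplementedError

class NoopAuthHandler(AuthHandler):
    def authenticate(self, req):
        return req

_IMPL_AUTH_HANDLER = None

def get_auth_handler(factory=NoopAuthHandler):
    # Instantiate on first use only; later calls reuse the cached driver,
    # just as the stevedore-based loader does.
    global _IMPL_AUTH_HANDLER
    if not _IMPL_AUTH_HANDLER:
        _IMPL_AUTH_HANDLER = factory()
    return _IMPL_AUTH_HANDLER

first = get_auth_handler()
second = get_auth_handler()
```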
import inspect import re from oslo_db import exception as db_exc from oslo_log import log as logging import six from yaql.language import exceptions as yaql_exc from yaql.language import factory from mistral import exceptions as exc from mistral.expressions.base_expression import Evaluator from mistral.utils import expression_utils LOG = logging.getLogger(__name__) YAQL_ENGINE = factory.YaqlFactory().create() INLINE_YAQL_REGEXP = '<%.*?%>' class YAQLEvaluator(Evaluator): @classmethod def validate(cls, expression): try: YAQL_ENGINE(expression) except (yaql_exc.YaqlException, KeyError, ValueError, TypeError) as e: raise exc.YaqlGrammarException(getattr(e, 'message', e)) @classmethod def evaluate(cls, expression, data_context): expression = expression.strip() if expression else expression try: result = YAQL_ENGINE(expression).evaluate( context=expression_utils.get_yaql_context(data_context) ) except Exception as e: # NOTE(rakhmerov): if we hit a database error then we need to # re-raise the initial exception so that upper layers had a # chance to handle it properly (e.g. in case of DB deadlock # the operations needs to retry. Essentially, such situation # indicates a problem with DB rather than with the expression # syntax or values. if isinstance(e, db_exc.DBError): LOG.error( "Failed to evaluate YAQL expression due to a database" " error, re-raising initial exception [expression=%s," " error=%s, data=%s]", expression, str(e), data_context ) raise e raise exc.YaqlEvaluationException( "Can not evaluate YAQL expression [expression=%s, error=%s" ", data=%s]" % (expression, str(e), data_context) ) return result if not inspect.isgenerator(result) else list(result) @classmethod def is_expression(cls, s): # The class should not be used outside of InlineYAQLEvaluator since by # convention, YAQL expression should always be wrapped in '<% %>'. 
return False class InlineYAQLEvaluator(YAQLEvaluator): # This regular expression will look for multiple occurrences of YAQL # expressions in '<% %>' (i.e. <% any_symbols %>) within a string. find_expression_pattern = re.compile(INLINE_YAQL_REGEXP) @classmethod def validate(cls, expression): if not isinstance(expression, six.string_types): raise exc.YaqlEvaluationException( "Unsupported type '%s'." % type(expression) ) found_expressions = cls.find_inline_expressions(expression) if found_expressions: [super(InlineYAQLEvaluator, cls).validate(expr.strip("<%>")) for expr in found_expressions] @classmethod def evaluate(cls, expression, data_context): LOG.debug( "Start to evaluate YAQL expression. " "[expression='%s', context=%s]", expression, data_context ) result = expression found_expressions = cls.find_inline_expressions(expression) if found_expressions: for expr in found_expressions: trim_expr = expr.strip("<%>") evaluated = super(InlineYAQLEvaluator, cls).evaluate(trim_expr, data_context) if len(expression) == len(expr): result = evaluated else: result = result.replace(expr, str(evaluated)) LOG.debug( "Finished evaluation. [expression='%s', result: %s]", expression, result ) return result @classmethod def is_expression(cls, s): return cls.find_expression_pattern.search(s) @classmethod def find_inline_expressions(cls, s): return cls.find_expression_pattern.findall(s) mistral-6.0.0/mistral/expressions/__init__.py0000666000175100017510000000621113245513261021366 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy from oslo_log import log as logging import six from stevedore import extension from mistral import exceptions as exc LOG = logging.getLogger(__name__) _mgr = extension.ExtensionManager( namespace='mistral.expression.evaluators', invoke_on_load=False ) _evaluators = [] patterns = {} for name in sorted(_mgr.names()): evaluator = _mgr[name].plugin _evaluators.append((name, evaluator)) patterns[name] = evaluator.find_expression_pattern.pattern def validate(expression): LOG.debug("Validating expression [expression='%s']", expression) if not isinstance(expression, six.string_types): return expression_found = None for name, evaluator in _evaluators: if evaluator.is_expression(expression): if expression_found: raise exc.ExpressionGrammarException( "The line already contains an expression of type '%s'. " "Mixing expression types in a single line is not allowed." % expression_found) try: evaluator.validate(expression) except Exception: raise else: expression_found = name def evaluate(expression, context): for name, evaluator in _evaluators: # Check if the passed value is expression so we don't need to do this # every time on a caller side. 
if (isinstance(expression, six.string_types) and evaluator.is_expression(expression)): return evaluator.evaluate(expression, context) return expression def _evaluate_item(item, context): if isinstance(item, six.string_types): try: return evaluate(item, context) except AttributeError as e: LOG.debug( "Expression %s is not evaluated, [context=%s]: %s", item, context, e ) return item else: return evaluate_recursively(item, context) def evaluate_recursively(data, context): data = copy.deepcopy(data) if not context: return data if isinstance(data, dict): for key in data: data[key] = _evaluate_item(data[key], context) elif isinstance(data, list): for index, item in enumerate(data): data[index] = _evaluate_item(item, context) elif isinstance(data, six.string_types): return _evaluate_item(data, context) return data mistral-6.0.0/mistral/expressions/jinja_expression.py0000666000175100017510000001233713245513261023207 0ustar zuulzuul00000000000000# Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
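`evaluate_recursively` above walks nested dicts and lists, evaluating any string leaf that looks like an expression. A self-contained sketch of the same traversal, with a toy `<% name %>` substitution standing in for the real YAQL/Jinja evaluators:

```python
import copy
import re

# Toy inline syntax; the real evaluators match '<% ... %>' or '{{ ... }}'.
PATTERN = re.compile(r'<%\s*(\w+)\s*%>')

def _evaluate_str(value, context):
    # Substitute every <% name %> with its context value, leaving unknown
    # names untouched (a failed evaluation is logged and skipped in
    # _evaluate_item above).
    return PATTERN.sub(
        lambda m: str(context.get(m.group(1), m.group(0))), value)

def evaluate_recursively(data, context):
    data = copy.deepcopy(data)
    if not context:
        return data
    if isinstance(data, dict):
        for key in data:
            data[key] = evaluate_recursively(data[key], context)
    elif isinstance(data, list):
        for index, item in enumerate(data):
            data[index] = evaluate_recursively(item, context)
    elif isinstance(data, str):
        return _evaluate_str(data, context)
    return data

result = evaluate_recursively(
    {'greeting': 'Hello <% name %>', 'items': ['<% a %>', 'plain']},
    {'name': 'Mistral', 'a': 1},
)
```

As in the original, an empty context short-circuits and the input is returned as a deep copy, unmodified.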
import re import jinja2 from jinja2 import parser as jinja_parse from jinja2.sandbox import SandboxedEnvironment from oslo_db import exception as db_exc from oslo_log import log as logging import six from mistral import exceptions as exc from mistral.expressions.base_expression import Evaluator from mistral.utils import expression_utils LOG = logging.getLogger(__name__) ANY_JINJA_REGEXP = "{{.*}}|{%.*%}" JINJA_REGEXP = '({{(.*?)}})' JINJA_BLOCK_REGEXP = '({%(.*?)%})' _environment = SandboxedEnvironment( undefined=jinja2.StrictUndefined, trim_blocks=True, lstrip_blocks=True ) _filters = expression_utils.get_custom_functions() for name in _filters: _environment.filters[name] = _filters[name] class JinjaEvaluator(Evaluator): _env = _environment.overlay() @classmethod def validate(cls, expression): if not isinstance(expression, six.string_types): raise exc.JinjaEvaluationException( "Unsupported type '%s'." % type(expression) ) try: parser = jinja_parse.Parser(cls._env, expression, state='variable') parser.parse_expression() except jinja2.exceptions.TemplateError as e: raise exc.JinjaGrammarException( "Syntax error '%s'." % str(e) ) @classmethod def evaluate(cls, expression, data_context): opts = {'undefined_to_none': False} ctx = expression_utils.get_jinja_context(data_context) try: result = cls._env.compile_expression(expression, **opts)(**ctx) # For StrictUndefined values, UndefinedError only gets raised when # the value is accessed, not when it gets created. The simplest way # to access it is to try and cast it to string. str(result) except Exception as e: # NOTE(rakhmerov): if we hit a database error then we need to # re-raise the initial exception so that upper layers had a # chance to handle it properly (e.g. in case of DB deadlock # the operations needs to retry. Essentially, such situation # indicates a problem with DB rather than with the expression # syntax or values. 
if isinstance(e, db_exc.DBError): LOG.error( "Failed to evaluate Jinja expression due to a database" " error, re-raising initial exception [expression=%s," " error=%s, data=%s]", expression, str(e), data_context ) raise e raise exc.JinjaEvaluationException( "Can not evaluate Jinja expression [expression=%s, error=%s" ", data=%s]" % (expression, str(e), data_context) ) return result @classmethod def is_expression(cls, s): # The class should only be called from within InlineJinjaEvaluator. The # return value prevents the class from being accidentally added as # Extension return False class InlineJinjaEvaluator(Evaluator): # The regular expression for Jinja variables and blocks find_expression_pattern = re.compile(JINJA_REGEXP) find_block_pattern = re.compile(JINJA_BLOCK_REGEXP) _env = _environment.overlay() @classmethod def validate(cls, expression): if not isinstance(expression, six.string_types): raise exc.JinjaEvaluationException( "Unsupported type '%s'." % type(expression) ) try: cls._env.parse(expression) except jinja2.exceptions.TemplateError as e: raise exc.JinjaGrammarException( "Syntax error '%s'." % str(e) ) @classmethod def evaluate(cls, expression, data_context): LOG.debug( "Start to evaluate Jinja expression. " "[expression='%s', context=%s]", expression, data_context ) patterns = cls.find_expression_pattern.findall(expression) if patterns[0][0] == expression: result = JinjaEvaluator.evaluate(patterns[0][1], data_context) else: ctx = expression_utils.get_jinja_context(data_context) result = cls._env.from_string(expression).render(**ctx) LOG.debug( "Finished evaluation. 
[expression='%s', result: %s]", expression, result ) return result @classmethod def is_expression(cls, s): return (cls.find_expression_pattern.search(s) or cls.find_block_pattern.search(s)) mistral-6.0.0/mistral/tests/0000775000175100017510000000000013245513604016034 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/unit/0000775000175100017510000000000013245513604017013 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/unit/actions/0000775000175100017510000000000013245513604020453 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/unit/actions/test_std_http_action.py0000666000175100017510000001203513245513261025254 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
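`InlineJinjaEvaluator.evaluate` above distinguishes a string that *is* a single expression, whose native (non-string) result is returned, from a string that merely *contains* expressions, which are stringified into the template. A stripped-down sketch of that branch using the same `JINJA_REGEXP`, with a direct context lookup in place of real Jinja compilation:

```python
import re

JINJA_REGEXP = '({{(.*?)}})'
find_expression_pattern = re.compile(JINJA_REGEXP)

def evaluate_inline(expression, context):
    patterns = find_expression_pattern.findall(expression)
    if patterns and patterns[0][0] == expression:
        # The whole string is one expression: keep the native type.
        return context[patterns[0][1].strip()]
    # Otherwise render each expression into the surrounding string.
    result = expression
    for whole, inner in patterns:
        result = result.replace(whole, str(context[inner.strip()]))
    return result

native = evaluate_inline('{{ count }}', {'count': 3})
mixed = evaluate_inline('total: {{ count }}', {'count': 3})
```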
import json import mock import requests from mistral.actions import std_actions as std from mistral.tests.unit import base from mistral_lib import actions as mistral_lib_actions URL = 'http://some_url' DATA = { 'server': { 'id': '12345', 'metadata': { 'name': 'super_server' } } } def get_fake_response(content, code, **kwargs): return base.FakeHTTPResponse( content, code, **kwargs ) def get_success_fake_response(): return get_fake_response( json.dumps(DATA), 200, headers={'Content-Type': 'application/json'} ) def get_error_fake_response(): return get_fake_response( json.dumps(DATA), 401 ) class HTTPActionTest(base.BaseTest): @mock.patch.object(requests, 'request') def test_http_action(self, mocked_method): mocked_method.return_value = get_success_fake_response() mock_ctx = mock.Mock() action = std.HTTPAction( url=URL, method='POST', body=DATA, timeout=20, allow_redirects=True ) DATA_STR = json.dumps(DATA) self.assertEqual(DATA_STR, action.body) self.assertEqual(URL, action.url) result = action.run(mock_ctx) self.assertIsInstance(result, dict) self.assertEqual(DATA, result['content']) self.assertIn('headers', result) self.assertEqual(200, result['status']) mocked_method.assert_called_with( 'POST', URL, data=DATA_STR, headers=None, cookies=None, params=None, timeout=20, auth=None, allow_redirects=True, proxies=None, verify=None ) @mock.patch.object(requests, 'request') def test_http_action_error_result(self, mocked_method): mocked_method.return_value = get_error_fake_response() mock_ctx = mock.Mock() action = std.HTTPAction( url=URL, method='POST', body=DATA, timeout=20, allow_redirects=True ) result = action.run(mock_ctx) self.assertIsInstance(result, mistral_lib_actions.Result) self.assertEqual(401, result.error['status']) @mock.patch.object(requests, 'request') def test_http_action_with_auth(self, mocked_method): mocked_method.return_value = get_success_fake_response() mock_ctx = mock.Mock() action = std.HTTPAction( url=URL, method='POST', auth='user:password' ) 
action.run(mock_ctx) args, kwargs = mocked_method.call_args self.assertEqual(('user', 'password'), kwargs['auth']) @mock.patch.object(requests, 'request') def test_http_action_with_headers(self, mocked_method): mocked_method.return_value = get_success_fake_response() mock_ctx = mock.Mock() headers = {'int_header': 33, 'bool_header': True, 'float_header': 3.0, 'regular_header': 'teststring'} safe_headers = {'int_header': '33', 'bool_header': 'True', 'float_header': '3.0', 'regular_header': 'teststring'} action = std.HTTPAction( url=URL, method='POST', headers=headers.copy(), ) result = action.run(mock_ctx) self.assertIn('headers', result) args, kwargs = mocked_method.call_args self.assertEqual(safe_headers, kwargs['headers']) @mock.patch.object(requests, 'request') def test_http_action_empty_resp(self, mocked_method): def invoke(content): action = std.HTTPAction( url=URL, method='GET', ) mocked_method.return_value = get_fake_response( content=content, code=200 ) result = action.run(mock.Mock()) self.assertEqual(content, result['content']) invoke(None) invoke('') @mock.patch.object(requests, 'request') def test_http_action_none_encoding_not_empty_resp(self, mocked_method): action = std.HTTPAction( url=URL, method='GET', ) mocked_method.return_value = get_fake_response( content='', code=200, encoding=None ) mock_ctx = mock.Mock() result = action.run(mock_ctx) self.assertIsNone(result['encoding']) mistral-6.0.0/mistral/tests/unit/actions/test_action_manager.py0000666000175100017510000000514213245513261025036 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from mistral.actions import std_actions as std from mistral.services import action_manager as a_m from mistral.tests.unit import base class ActionManagerTest(base.DbTestCase): def test_register_standard_actions(self): action_list = a_m.get_registered_actions() self._assert_single_item(action_list, name="std.echo") self._assert_single_item(action_list, name="std.email") self._assert_single_item(action_list, name="std.http") self._assert_single_item(action_list, name="std.mistral_http") self._assert_single_item(action_list, name="std.ssh") self._assert_single_item(action_list, name="std.javascript") self._assert_single_item(action_list, name="nova.servers_get") self._assert_single_item( action_list, name="nova.volumes_delete_server_volume" ) server_find_action = self._assert_single_item( action_list, name="nova.servers_find" ) self.assertIn('**', server_find_action.input) self._assert_single_item(action_list, name="keystone.users_list") self._assert_single_item(action_list, name="keystone.trusts_create") self._assert_single_item(action_list, name="glance.images_list") self._assert_single_item(action_list, name="glance.images_delete") def test_get_action_class(self): self.assertTrue( issubclass(a_m.get_action_class("std.echo"), std.EchoAction) ) self.assertTrue( issubclass(a_m.get_action_class("std.http"), std.HTTPAction) ) self.assertTrue( issubclass( a_m.get_action_class("std.mistral_http"), std.MistralHTTPAction ) ) self.assertTrue( issubclass(a_m.get_action_class("std.email"), std.SendEmailAction) ) self.assertTrue( issubclass( 
a_m.get_action_class("std.javascript"), std.JavaScriptAction ) ) mistral-6.0.0/mistral/tests/unit/actions/openstack/0000775000175100017510000000000013245513604022442 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/unit/actions/openstack/test_openstack_actions.py0000666000175100017510000003110013245513261027556 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import mock from mistral.actions.openstack import actions from oslo_config import cfg from oslotest import base CONF = cfg.CONF class FakeEndpoint(object): def __init__(self, **kwargs): self.__dict__.update(kwargs) class OpenStackActionTest(base.BaseTestCase): def tearDown(self): super(OpenStackActionTest, self).tearDown() cfg.CONF.set_default('auth_enable', False, group='pecan') @mock.patch.object(actions.NovaAction, '_get_client') def test_nova_action(self, mocked): mock_ctx = mock.Mock() method_name = "servers.get" action_class = actions.NovaAction action_class.client_method_name = method_name params = {'server': '1234-abcd'} action = action_class(**params) action.run(mock_ctx) self.assertTrue(mocked().servers.get.called) mocked().servers.get.assert_called_once_with(server="1234-abcd") @mock.patch.object(actions.GlanceAction, '_get_client') def test_glance_action(self, mocked): mock_ctx = mock.Mock() method_name = "images.delete" action_class = actions.GlanceAction action_class.client_method_name = method_name params = {'image': '1234-abcd'} action = 
action_class(**params) action.run(mock_ctx) self.assertTrue(mocked().images.delete.called) mocked().images.delete.assert_called_once_with(image="1234-abcd") @mock.patch.object(actions.KeystoneAction, '_get_client') def test_keystone_action(self, mocked): mock_ctx = mock.Mock() method_name = "users.get" action_class = actions.KeystoneAction action_class.client_method_name = method_name params = {'user': '1234-abcd'} action = action_class(**params) action.run(mock_ctx) self.assertTrue(mocked().users.get.called) mocked().users.get.assert_called_once_with(user="1234-abcd") @mock.patch.object(actions.HeatAction, '_get_client') def test_heat_action(self, mocked): mock_ctx = mock.Mock() method_name = "stacks.get" action_class = actions.HeatAction action_class.client_method_name = method_name params = {'id': '1234-abcd'} action = action_class(**params) action.run(mock_ctx) self.assertTrue(mocked().stacks.get.called) mocked().stacks.get.assert_called_once_with(id="1234-abcd") @mock.patch.object(actions.NeutronAction, '_get_client') def test_neutron_action(self, mocked): mock_ctx = mock.Mock() method_name = "show_network" action_class = actions.NeutronAction action_class.client_method_name = method_name params = {'id': '1234-abcd'} action = action_class(**params) action.run(mock_ctx) self.assertTrue(mocked().show_network.called) mocked().show_network.assert_called_once_with(id="1234-abcd") @mock.patch.object(actions.CinderAction, '_get_client') def test_cinder_action(self, mocked): mock_ctx = mock.Mock() method_name = "volumes.get" action_class = actions.CinderAction action_class.client_method_name = method_name params = {'volume': '1234-abcd'} action = action_class(**params) action.run(mock_ctx) self.assertTrue(mocked().volumes.get.called) mocked().volumes.get.assert_called_once_with(volume="1234-abcd") @mock.patch.object(actions.TroveAction, '_get_client') def test_trove_action(self, mocked): mock_ctx = mock.Mock() method_name = "instances.get" action_class = 
actions.TroveAction action_class.client_method_name = method_name params = {'instance': '1234-abcd'} action = action_class(**params) action.run(mock_ctx) self.assertTrue(mocked().instances.get.called) mocked().instances.get.assert_called_once_with(instance="1234-abcd") @mock.patch.object(actions.IronicAction, '_get_client') def test_ironic_action(self, mocked): mock_ctx = mock.Mock() method_name = "node.get" action_class = actions.IronicAction action_class.client_method_name = method_name params = {'node': '1234-abcd'} action = action_class(**params) action.run(mock_ctx) self.assertTrue(mocked().node.get.called) mocked().node.get.assert_called_once_with(node="1234-abcd") @mock.patch.object(actions.BaremetalIntrospectionAction, '_get_client') def test_baremetal_introspector_action(self, mocked): mock_ctx = mock.Mock() method_name = "get_status" action_class = actions.BaremetalIntrospectionAction action_class.client_method_name = method_name params = {'uuid': '1234'} action = action_class(**params) action.run(mock_ctx) self.assertTrue(mocked().get_status.called) mocked().get_status.assert_called_once_with(uuid="1234") @mock.patch.object(actions.MistralAction, '_get_client') def test_mistral_action(self, mocked): mock_ctx = mock.Mock() method_name = "workflows.get" action_class = actions.MistralAction action_class.client_method_name = method_name params = {'name': '1234-abcd'} action = action_class(**params) action.run(mock_ctx) self.assertTrue(mocked().workflows.get.called) mocked().workflows.get.assert_called_once_with(name="1234-abcd") @mock.patch.object(actions.MistralAction, 'get_session_and_auth') def test_integrated_mistral_action(self, mocked): CONF.set_default('auth_enable', True, group='pecan') mock_endpoint = mock.Mock() mock_endpoint.endpoint = 'http://testendpoint.com:8989/v2' mocked.return_value = {'auth': mock_endpoint, 'session': None} mock_ctx = mock.Mock() action_class = actions.MistralAction params = {'identifier': '1234-abcd'} action = 
action_class(**params) client = action._get_client(mock_ctx) self.assertEqual(client.workbooks.http_client.base_url, mock_endpoint.endpoint) def test_standalone_mistral_action(self): CONF.set_default('auth_enable', False, group='pecan') mock_ctx = mock.Mock() action_class = actions.MistralAction params = {'identifier': '1234-abcd'} action = action_class(**params) client = action._get_client(mock_ctx) base_url = 'http://{}:{}/v2'.format(CONF.api.host, CONF.api.port) self.assertEqual(client.workbooks.http_client.base_url, base_url) @mock.patch.object(actions.SwiftAction, '_get_client') def test_swift_action(self, mocked): mock_ctx = mock.Mock() method_name = "get_object" action_class = actions.SwiftAction action_class.client_method_name = method_name params = {'container': 'foo', 'object': 'bar'} action = action_class(**params) action.run(mock_ctx) self.assertTrue(mocked().get_object.called) mocked().get_object.assert_called_once_with(container='foo', object='bar') @mock.patch.object(actions.ZaqarAction, '_get_client') def test_zaqar_action(self, mocked): mock_ctx = mock.Mock() method_name = "queue_messages" action_class = actions.ZaqarAction action_class.client_method_name = method_name params = {'queue_name': 'foo'} action = action_class(**params) action.run(mock_ctx) mocked().queue.assert_called_once_with('foo') mocked().queue().messages.assert_called_once_with() @mock.patch.object(actions.BarbicanAction, '_get_client') def test_barbican_action(self, mocked): mock_ctx = mock.Mock() method_name = "orders_list" action_class = actions.BarbicanAction action_class.client_method_name = method_name params = {'limit': 5} action = action_class(**params) action.run(mock_ctx) self.assertTrue(mocked().orders_list.called) mocked().orders_list.assert_called_once_with(limit=5) @mock.patch.object(actions.DesignateAction, '_get_client') def test_designate_action(self, mocked): mock_ctx = mock.Mock() method_name = "domain.get" action_class = actions.DesignateAction 
action_class.client_method_name = method_name params = {'domain': 'example.com'} action = action_class(**params) action.run(mock_ctx) self.assertTrue(mocked().domain.get.called) mocked().domain.get.assert_called_once_with(domain="example.com") @mock.patch.object(actions.MagnumAction, '_get_client') def test_magnum_action(self, mocked): mock_ctx = mock.Mock() method_name = "baymodels.get" action_class = actions.MagnumAction action_class.client_method_name = method_name params = {'id': '1234-abcd'} action = action_class(**params) action.run(mock_ctx) self.assertTrue(mocked().baymodels.get.called) mocked().baymodels.get.assert_called_once_with(id="1234-abcd") @mock.patch.object(actions.MuranoAction, '_get_client') def test_murano_action(self, mocked): mock_ctx = mock.Mock() method_name = "categories.get" action_class = actions.MuranoAction action_class.client_method_name = method_name params = {'category_id': '1234-abcd'} action = action_class(**params) action.run(mock_ctx) self.assertTrue(mocked().categories.get.called) mocked().categories.get.assert_called_once_with( category_id="1234-abcd" ) @mock.patch.object(actions.TackerAction, '_get_client') def test_tacker_action(self, mocked): mock_ctx = mock.Mock() method_name = "show_vim" action_class = actions.TackerAction action_class.client_method_name = method_name params = {'vim_id': '1234-abcd'} action = action_class(**params) action.run(mock_ctx) self.assertTrue(mocked().show_vim.called) mocked().show_vim.assert_called_once_with( vim_id="1234-abcd" ) @mock.patch.object(actions.SenlinAction, '_get_client') def test_senlin_action(self, mocked): mock_ctx = mock.Mock() action_class = actions.SenlinAction action_class.client_method_name = "get_cluster" action = action_class(cluster_id='1234-abcd') action.run(mock_ctx) self.assertTrue(mocked().get_cluster.called) mocked().get_cluster.assert_called_once_with( cluster_id="1234-abcd" ) @mock.patch.object(actions.AodhAction, '_get_client') def test_aodh_action(self, mocked): 
mock_ctx = mock.Mock() method_name = "alarm.get" action_class = actions.AodhAction action_class.client_method_name = method_name params = {'alarm_id': '1234-abcd'} action = action_class(**params) action.run(mock_ctx) self.assertTrue(mocked().alarm.get.called) mocked().alarm.get.assert_called_once_with(alarm_id="1234-abcd") @mock.patch.object(actions.GnocchiAction, '_get_client') def test_gnocchi_action(self, mocked): mock_ctx = mock.Mock() method_name = "metric.get" action_class = actions.GnocchiAction action_class.client_method_name = method_name params = {'metric_id': '1234-abcd'} action = action_class(**params) action.run(mock_ctx) self.assertTrue(mocked().metric.get.called) mocked().metric.get.assert_called_once_with(metric_id="1234-abcd") @mock.patch.object(actions.GlareAction, '_get_client') def test_glare_action(self, mocked): mock_ctx = mock.Mock() method_name = "artifacts.get" action_class = actions.GlareAction action_class.client_method_name = method_name params = {'artifact_id': '1234-abcd'} action = action_class(**params) action.run(mock_ctx) self.assertTrue(mocked().artifacts.get.called) mocked().artifacts.get.assert_called_once_with(artifact_id="1234-abcd") mistral-6.0.0/mistral/tests/unit/actions/openstack/test_generator.py0000666000175100017510000001621213245513261026044 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import contextlib import os from oslo_config import cfg import mock from mistral.actions import generator_factory from mistral.actions.openstack.action_generator import base as generator_base from mistral.actions.openstack import actions from mistral import config from mistral.tests.unit import base ABSOLUTE_TEST_MAPPING_PATH = os.path.realpath( os.path.join(os.path.dirname(__file__), "../../../resources/openstack/test_mapping.json") ) RELATIVE_TEST_MAPPING_PATH = "tests/resources/openstack/test_mapping.json" MODULE_MAPPING = { 'nova': ['nova.servers_get', actions.NovaAction], 'glance': ['glance.images_list', actions.GlanceAction], 'keystone': ['keystone.users_create', actions.KeystoneAction], 'heat': ['heat.stacks_list', actions.HeatAction], 'neutron': ['neutron.show_network', actions.NeutronAction], 'cinder': ['cinder.volumes_list', actions.CinderAction], 'trove': ['trove.instances_list', actions.TroveAction], 'ironic': ['ironic.node_list', actions.IronicAction], 'baremetal_introspection': ['baremetal_introspection.introspect', actions.BaremetalIntrospectionAction], 'swift': ['swift.head_account', actions.SwiftAction], 'zaqar': ['zaqar.queue_messages', actions.ZaqarAction], 'barbican': ['barbican.orders_list', actions.BarbicanAction], 'mistral': ['mistral.workflows_get', actions.MistralAction], 'designate': ['designate.domains_list', actions.DesignateAction], 'magnum': ['magnum.bays_list', actions.MagnumAction], 'murano': ['murano.deployments_list', actions.MuranoAction], 'tacker': ['tacker.list_vims', actions.TackerAction], 'senlin': ['senlin.get_profile', actions.SenlinAction], 'aodh': ['aodh.alarm_list', actions.AodhAction], 'gnocchi': ['gnocchi.metric_list', actions.GnocchiAction], 'glare': ['glare.artifacts_list', actions.GlareAction] } EXTRA_MODULES = ['neutron', 'swift', 'zaqar', 'tacker'] CONF = cfg.CONF CONF.register_opt(config.os_actions_mapping_path) class GeneratorTest(base.BaseTest): def setUp(self): super(GeneratorTest, self).setUp() # The baremetal 
inspector client expects the service to be running # when it is initialised and attempts to connect. This mocks out this # service only and returns a simple function that can be used by the # inspection utils. self.baremetal_patch = mock.patch.object( actions.BaremetalIntrospectionAction, "get_fake_client_method", return_value=lambda x: None) self.baremetal_patch.start() self.addCleanup(self.baremetal_patch.stop) def test_generator(self): for generator_cls in generator_factory.all_generators(): action_classes = generator_cls.create_actions() action_name = MODULE_MAPPING[generator_cls.action_namespace][0] action_cls = MODULE_MAPPING[generator_cls.action_namespace][1] method_name_pre = action_name.split('.')[1] method_name = ( method_name_pre if generator_cls.action_namespace in EXTRA_MODULES else method_name_pre.replace('_', '.') ) action = self._assert_single_item( action_classes, name=action_name ) self.assertTrue(issubclass(action['class'], action_cls)) self.assertEqual(method_name, action['class'].client_method_name) modules = CONF.openstack_actions.modules_support_region if generator_cls.action_namespace in modules: self.assertIn('action_region', action['arg_list']) def test_missing_module_from_mapping(self): with _patch_openstack_action_mapping_path(RELATIVE_TEST_MAPPING_PATH): for generator_cls in generator_factory.all_generators(): action_classes = generator_cls.create_actions() action_names = [action['name'] for action in action_classes] cls = MODULE_MAPPING.get(generator_cls.action_namespace)[1] if cls == actions.NovaAction: self.assertIn('nova.servers_get', action_names) self.assertEqual(3, len(action_names)) elif cls not in (actions.GlanceAction, actions.KeystoneAction): self.assertEqual([], action_names) def test_absolute_mapping_path(self): with _patch_openstack_action_mapping_path(ABSOLUTE_TEST_MAPPING_PATH): self.assertTrue(os.path.isabs(ABSOLUTE_TEST_MAPPING_PATH), "Mapping path is relative: %s" % ABSOLUTE_TEST_MAPPING_PATH) for generator_cls in 
generator_factory.all_generators(): action_classes = generator_cls.create_actions() action_names = [action['name'] for action in action_classes] cls = MODULE_MAPPING.get(generator_cls.action_namespace)[1] if cls == actions.NovaAction: self.assertIn('nova.servers_get', action_names) self.assertEqual(3, len(action_names)) elif cls not in (actions.GlanceAction, actions.KeystoneAction): self.assertEqual([], action_names) def test_prepare_action_inputs(self): inputs = generator_base.OpenStackActionGenerator.prepare_action_inputs( 'a,b,c', added=['region=RegionOne'] ) self.assertEqual('a, b, c, region=RegionOne', inputs) inputs = generator_base.OpenStackActionGenerator.prepare_action_inputs( 'a,b,c=1', added=['region=RegionOne'] ) self.assertEqual('a, b, region=RegionOne, c=1', inputs) inputs = generator_base.OpenStackActionGenerator.prepare_action_inputs( 'a,b,c=1,**kwargs', added=['region=RegionOne'] ) self.assertEqual('a, b, region=RegionOne, c=1, **kwargs', inputs) inputs = generator_base.OpenStackActionGenerator.prepare_action_inputs( '**kwargs', added=['region=RegionOne'] ) self.assertEqual('region=RegionOne, **kwargs', inputs) inputs = generator_base.OpenStackActionGenerator.prepare_action_inputs( '', added=['region=RegionOne'] ) self.assertEqual('region=RegionOne', inputs) @contextlib.contextmanager def _patch_openstack_action_mapping_path(path): original_path = CONF.openstack_actions_mapping_path CONF.set_default("openstack_actions_mapping_path", path) yield CONF.set_default("openstack_actions_mapping_path", original_path) mistral-6.0.0/mistral/tests/unit/actions/openstack/__init__.py0000666000175100017510000000000013245513261024542 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/unit/actions/test_std_email_action.py0000666000175100017510000001342513245513261025370 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # # Copyright 2014 - Mirantis, Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import base64 from email.header import decode_header from email import parser import mock import six import testtools from mistral.actions import std_actions as std from mistral import exceptions as exc from mistral.tests.unit import base """ To try against a real SMTP server: 1) set LOCAL_SMTPD = True run debug smtpd on the local machine: `sudo python -m smtpd -c DebuggingServer -n localhost:25` Debugging server doesn't support password. 2) set REMOTE_SMTP = True use external SMTP (like gmail), change the configuration, provide actual username and password self.settings = { 'host': 'smtp.gmail.com:587', 'from': 'youraccount@gmail.com', 'password': 'secret' } """ LOCAL_SMTPD = False REMOTE_SMTP = False class SendEmailActionTest(base.BaseTest): def setUp(self): super(SendEmailActionTest, self).setUp() self.to_addrs = ["dz@example.com", "deg@example.com", "xyz@example.com"] self.subject = "Multi word subject Ñ Ñ€ÑƒÑÑкими буквами" self.body = "short multiline\nbody\nc руÑÑкими буквами" self.smtp_server = 'mail.example.com:25' self.from_addr = "bot@example.com" self.to_addrs_str = ", ".join(self.to_addrs) self.ctx = mock.Mock() @testtools.skipIf(not LOCAL_SMTPD, "Setup local smtpd to run it") def test_send_email_real(self): action = std.SendEmailAction( self.from_addr, self.to_addrs, self.smtp_server, None, self.subject, self.body ) action.run(self.ctx) @testtools.skipIf(not REMOTE_SMTP, "Configure Remote SMTP to run it") def 
test_with_password_real(self): self.to_addrs = ["dz@stackstorm.com"] self.smtp_server = 'mail.example.com:25' self.from_addr = "bot@example.com" self.smtp_password = 'secret' action = std.SendEmailAction( self.from_addr, self.to_addrs, self.smtp_server, self.smtp_password, self.subject, self.body ) action.run(self.ctx) @mock.patch('smtplib.SMTP') def test_with_multi_to_addrs(self, smtp): smtp_password = "secret" action = std.SendEmailAction( self.from_addr, self.to_addrs, self.smtp_server, smtp_password, self.subject, self.body ) action.run(self.ctx) @mock.patch('smtplib.SMTP') def test_with_one_to_addr(self, smtp): to_addr = ["dz@example.com"] smtp_password = "secret" action = std.SendEmailAction( self.from_addr, to_addr, self.smtp_server, smtp_password, self.subject, self.body ) action.run(self.ctx) @mock.patch('smtplib.SMTP') def test_send_email(self, smtp): action = std.SendEmailAction( self.from_addr, self.to_addrs, self.smtp_server, None, self.subject, self.body ) action.run(self.ctx) smtp.assert_called_once_with(self.smtp_server) sendmail = smtp.return_value.sendmail self.assertTrue(sendmail.called, "should call sendmail") self.assertEqual( self.from_addr, sendmail.call_args[1]['from_addr']) self.assertEqual( self.to_addrs, sendmail.call_args[1]['to_addrs']) message = parser.Parser().parsestr(sendmail.call_args[1]['msg']) self.assertEqual(self.from_addr, message['from']) self.assertEqual(self.to_addrs_str, message['to']) if six.PY3: self.assertEqual( self.subject, decode_header(message['subject'])[0][0].decode('utf-8') ) else: self.assertEqual( self.subject.decode('utf-8'), decode_header(message['subject'])[0][0].decode('utf-8') ) if six.PY3: self.assertEqual( self.body, base64.b64decode(message.get_payload()).decode('utf-8') ) else: self.assertEqual( self.body.decode('utf-8'), base64.b64decode(message.get_payload()).decode('utf-8') ) @mock.patch('smtplib.SMTP') def test_with_password(self, smtp): self.smtp_password = "secret" action = std.SendEmailAction(
self.from_addr, self.to_addrs, self.smtp_server, self.smtp_password, self.subject, self.body ) action.run(self.ctx) smtpmock = smtp.return_value calls = [mock.call.ehlo(), mock.call.starttls(), mock.call.ehlo(), mock.call.login(self.from_addr, self.smtp_password)] smtpmock.assert_has_calls(calls) self.assertTrue(smtpmock.sendmail.called, "should call sendmail") @mock.patch('mistral.actions.std_actions.LOG') def test_exception(self, log): self.smtp_server = "wrong host" action = std.SendEmailAction( self.from_addr, self.to_addrs, self.smtp_server, None, self.subject, self.body ) try: action.run(self.ctx) except exc.ActionException: pass else: self.fail("Must throw exception.") mistral-6.0.0/mistral/tests/unit/actions/test_std_echo_action.py0000666000175100017510000000165013245513261025214 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from mistral.actions import std_actions as std from mistral.tests.unit import base import mock class EchoActionTest(base.BaseTest): def test_fake_action(self): expected = "my output" mock_ctx = mock.Mock() action = std.EchoAction(expected) self.assertEqual(expected, action.run(mock_ctx)) mistral-6.0.0/mistral/tests/unit/actions/test_javascript_action.py0000666000175100017510000000205413245513261025571 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc.
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import mock from mistral.actions import std_actions as std from mistral.tests.unit import base from mistral.utils import javascript class JavascriptActionTest(base.BaseTest): @mock.patch.object( javascript, 'evaluate', mock.Mock(return_value="3") ) def test_js_action(self): mock_ctx = mock.Mock() script = "return 1 + 2" action = std.JavaScriptAction(script) self.assertEqual("3", action.run(mock_ctx)) mistral-6.0.0/mistral/tests/unit/actions/test_std_fail_action.py0000666000175100017510000000162513245513272025215 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import mock from mistral.actions import std_actions as std from mistral import exceptions as exc from mistral.tests.unit import base class FailActionTest(base.BaseTest): def test_fail_action(self): action = std.FailAction() self.assertRaises(exc.ActionException, action.run, mock.Mock) mistral-6.0.0/mistral/tests/unit/actions/__init__.py0000666000175100017510000000000013245513261022553 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/unit/rpc/0000775000175100017510000000000013245513604017577 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/unit/rpc/__init__.py0000666000175100017510000000000013245513262021700 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/unit/rpc/kombu/0000775000175100017510000000000013245513604020714 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/unit/rpc/kombu/test_kombu_listener.py0000666000175100017510000001530713245513262025357 0ustar zuulzuul00000000000000# Copyright (c) 2017 Intel Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from mistral import exceptions as exc
from mistral.tests.unit.rpc.kombu import base
from mistral.tests.unit.rpc.kombu import fake_kombu
from mistral import utils

import mock
from six import moves

with mock.patch.dict('sys.modules', kombu=fake_kombu):
    from mistral.rpc.kombu import base as kombu_base
    from mistral.rpc.kombu import kombu_listener


class TestException(exc.MistralException):
    pass


class KombuListenerTestCase(base.KombuTestCase):

    def setUp(self):
        super(KombuListenerTestCase, self).setUp()

        self.listener = kombu_listener.KombuRPCListener(
            [mock.MagicMock()],
            mock.MagicMock()
        )
        self.ctx = type('context', (object,), {'to_dict': lambda self: {}})()

    def test_add_listener(self):
        correlation_id = utils.generate_unicode_uuid()

        self.listener.add_listener(correlation_id)

        self.assertEqual(
            type(self.listener._results.get(correlation_id)),
            moves.queue.Queue
        )
        self.assertEqual(0, self.listener._results[correlation_id].qsize())

    def test_remove_listener_correlation_id_in_results(self):
        correlation_id = utils.generate_unicode_uuid()

        self.listener.add_listener(correlation_id)

        self.assertEqual(
            type(self.listener._results.get(correlation_id)),
            moves.queue.Queue
        )

        self.listener.remove_listener(correlation_id)

        self.assertIsNone(
            self.listener._results.get(correlation_id)
        )

    def test_remove_listener_correlation_id_not_in_results(self):
        correlation_id = utils.generate_unicode_uuid()

        self.listener.add_listener(correlation_id)

        self.assertEqual(
            type(self.listener._results.get(correlation_id)),
            moves.queue.Queue
        )

        self.listener.remove_listener(utils.generate_unicode_uuid())

        self.assertEqual(
            type(self.listener._results.get(correlation_id)),
            moves.queue.Queue
        )

    @mock.patch('threading.Thread')
    def test_start_thread_not_set(self, thread_class_mock):
        thread_mock = mock.MagicMock()
        thread_class_mock.return_value = thread_mock

        self.listener.start()

        self.assertTrue(thread_mock.daemon)
        self.assertEqual(thread_mock.start.call_count, 1)

    @mock.patch('threading.Thread')
    def test_start_thread_set(self, thread_class_mock):
        thread_mock = mock.MagicMock()
        thread_class_mock.return_value = thread_mock

        self.listener._thread = mock.MagicMock()

        self.listener.start()

        self.assertEqual(thread_mock.start.call_count, 0)

    def test_get_result_results_in_queue(self):
        expected_result = 'abcd'

        correlation_id = utils.generate_unicode_uuid()

        self.listener.add_listener(correlation_id)
        self.listener._results.get(correlation_id).put(expected_result)

        result = self.listener.get_result(correlation_id, 5)

        self.assertEqual(result, expected_result)

    def test_get_result_not_in_queue(self):
        correlation_id = utils.generate_unicode_uuid()

        self.listener.add_listener(correlation_id)

        self.assertRaises(
            moves.queue.Empty,
            self.listener.get_result,
            correlation_id,
            1  # timeout
        )

    def test_get_result_lack_of_queue(self):
        correlation_id = utils.generate_unicode_uuid()

        self.assertRaises(
            KeyError,
            self.listener.get_result,
            correlation_id,
            1  # timeout
        )

    def test__on_response_message_ack_fail(self):
        message = mock.MagicMock()
        message.ack.side_effect = Exception('Test Exception')
        response = 'response'

        kombu_listener.LOG = mock.MagicMock()

        self.listener.on_message(response, message)

        self.assertEqual(kombu_listener.LOG.debug.call_count, 1)
        self.assertEqual(kombu_listener.LOG.exception.call_count, 1)

    def test__on_response_message_ack_ok_corr_id_not_match(self):
        message = mock.MagicMock()
        message.properties = mock.MagicMock()
        message.properties.__getitem__ = lambda *args, **kwargs: True
        response = 'response'

        kombu_listener.LOG = mock.MagicMock()

        self.listener.on_message(response, message)

        self.assertEqual(kombu_listener.LOG.debug.call_count, 3)
        self.assertEqual(kombu_listener.LOG.exception.call_count, 0)

    def test__on_response_message_ack_ok_message_type_error(self):
        correlation_id = utils.generate_unicode_uuid()

        message = mock.MagicMock()
        message.properties = dict()
        message.properties['type'] = 'error'
        message.properties['correlation_id'] = correlation_id

        response = TestException('response')

        kombu_listener.LOG = mock.MagicMock()

        self.listener.add_listener(correlation_id)
        self.listener.on_message(response, message)

        self.assertEqual(kombu_listener.LOG.debug.call_count, 2)
        self.assertEqual(kombu_listener.LOG.exception.call_count, 0)

        result = self.listener.get_result(correlation_id, 5)

        self.assertDictEqual(
            result,
            {
                kombu_base.TYPE: 'error',
                kombu_base.RESULT: response
            }
        )

    def test__on_response_message_ack_ok(self):
        correlation_id = utils.generate_unicode_uuid()

        message = mock.MagicMock()
        message.properties = dict()
        message.properties['type'] = None
        message.properties['correlation_id'] = correlation_id

        response = 'response'

        kombu_listener.LOG = mock.MagicMock()

        self.listener.add_listener(correlation_id)
        self.listener.on_message(response, message)

        self.assertEqual(kombu_listener.LOG.debug.call_count, 2)
        self.assertEqual(kombu_listener.LOG.exception.call_count, 0)

        result = self.listener.get_result(correlation_id, 5)

        self.assertDictEqual(
            result,
            {
                kombu_base.TYPE: None,
                kombu_base.RESULT: response
            }
        )


mistral-6.0.0/mistral/tests/unit/rpc/kombu/fake_kombu.py

# Copyright (c) 2016 Intel Corporation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from kombu import mixins as mx

import mock


# Hack for making tests work with the kombu listener mixins
mixins = mx

producer = mock.MagicMock()

producers = mock.MagicMock()
producers.__getitem__ = lambda *args, **kwargs: producer

connection = mock.MagicMock()

connections = mock.MagicMock()
connections.__getitem__ = lambda *args, **kwargs: connection

serialization = mock.MagicMock()


def BrokerConnection(*args, **kwargs):
    return mock.MagicMock()


def Exchange(*args, **kwargs):
    return mock.MagicMock()


def Queue(*args, **kwargs):
    return mock.MagicMock()


def Consumer(*args, **kwargs):
    return mock.MagicMock()


mistral-6.0.0/mistral/tests/unit/rpc/kombu/test_kombu_server.py

# Copyright (c) 2016 Intel Corporation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from mistral import context
from mistral import exceptions as exc
from mistral.tests.unit.rpc.kombu import base
from mistral.tests.unit.rpc.kombu import fake_kombu

import mock
import socket
from stevedore import driver

with mock.patch.dict('sys.modules', kombu=fake_kombu):
    from mistral.rpc.kombu import kombu_server


class TestException(exc.MistralError):
    pass


class KombuServerTestCase(base.KombuTestCase):

    def setUp(self):
        super(KombuServerTestCase, self).setUp()

        self.conf = mock.MagicMock()
        self.server = kombu_server.KombuRPCServer(self.conf)
        self.ctx = type('context', (object,), {'to_dict': lambda self: {}})()

    def test_is_running_is_running(self):
        self.server._running.set()

        self.assertTrue(self.server.is_running)

    def test_is_running_is_not_running(self):
        self.server._running.clear()

        self.assertFalse(self.server.is_running)

    def test_stop(self):
        self.server.stop()

        self.assertFalse(self.server.is_running)

    def test_publish_message(self):
        body = 'body'
        reply_to = 'reply_to'
        corr_id = 'corr_id'
        type = 'type'

        acquire_mock = mock.MagicMock()
        fake_kombu.producer.acquire.return_value = acquire_mock

        enter_mock = mock.MagicMock()
        acquire_mock.__enter__.return_value = enter_mock

        self.server.publish_message(body, reply_to, corr_id, type)

        enter_mock.publish.assert_called_once_with(
            body={'body': '"body"'},
            exchange='openstack',
            routing_key=reply_to,
            correlation_id=corr_id,
            type=type,
            serializer='json'
        )

    def test_run_launch_successfully(self):
        acquire_mock = mock.MagicMock()
        acquire_mock.drain_events.side_effect = TestException()
        fake_kombu.connection.acquire.return_value = acquire_mock

        self.assertRaises(TestException, self.server._run, 'blocking')
        self.assertTrue(self.server.is_running)

    def test_run_launch_successfully_then_stop(self):
        def side_effect(*args, **kwargs):
            self.assertTrue(self.server.is_running)
            raise KeyboardInterrupt

        acquire_mock = mock.MagicMock()
        acquire_mock.drain_events.side_effect = side_effect
        fake_kombu.connection.acquire.return_value = acquire_mock

        self.server._run('blocking')

        self.assertFalse(self.server.is_running)
        self.assertEqual(self.server._sleep_time, 1)

    def test_run_socket_error_reconnect(self):
        def side_effect(*args, **kwargs):
            if acquire_mock.drain_events.call_count == 1:
                raise socket.error()
            raise TestException()

        acquire_mock = mock.MagicMock()
        acquire_mock.drain_events.side_effect = side_effect
        fake_kombu.connection.acquire.return_value = acquire_mock

        self.assertRaises(TestException, self.server._run, 'blocking')
        self.assertEqual(self.server._sleep_time, 1)

    def test_run_socket_timeout_still_running(self):
        def side_effect(*args, **kwargs):
            if acquire_mock.drain_events.call_count == 0:
                raise socket.timeout()
            raise TestException()

        acquire_mock = mock.MagicMock()
        acquire_mock.drain_events.side_effect = side_effect
        fake_kombu.connection.acquire.return_value = acquire_mock

        self.assertRaises(
            TestException,
            self.server._run,
            'blocking'
        )
        self.assertTrue(self.server.is_running)

    def test_run_keyboard_interrupt_not_running(self):
        acquire_mock = mock.MagicMock()
        acquire_mock.drain_events.side_effect = KeyboardInterrupt()
        fake_kombu.connection.acquire.return_value = acquire_mock

        self.assertIsNone(self.server.run())
        self.assertFalse(self.server.is_running)

    @mock.patch.object(
        kombu_server.KombuRPCServer,
        '_on_message',
        mock.MagicMock()
    )
    @mock.patch.object(kombu_server.KombuRPCServer, 'publish_message')
    def test__on_message_safe_message_processing_ok(self, publish_message):
        message = mock.MagicMock()

        self.server._on_message_safe(None, message)

        self.assertEqual(message.ack.call_count, 1)
        self.assertEqual(publish_message.call_count, 0)

    @mock.patch.object(kombu_server.KombuRPCServer, '_on_message')
    @mock.patch.object(kombu_server.KombuRPCServer, 'publish_message')
    def test__on_message_safe_message_processing_raise(
            self,
            publish_message,
            _on_message
    ):
        reply_to = 'reply_to'
        correlation_id = 'corr_id'

        message = mock.MagicMock()
        message.properties = {
            'reply_to': reply_to,
            'correlation_id': correlation_id
        }

        test_exception = TestException()
        _on_message.side_effect = test_exception

        self.server._on_message_safe(None, message)

        self.assertEqual(message.ack.call_count, 1)
        self.assertEqual(publish_message.call_count, 1)

    @mock.patch.object(
        kombu_server.KombuRPCServer,
        '_get_rpc_method',
        mock.MagicMock(return_value=None)
    )
    def test__on_message_rpc_method_not_found(self):
        request = {
            'rpc_ctx': {},
            'rpc_method': 'not_found_method',
            'arguments': {}
        }

        message = mock.MagicMock()
        message.properties = {
            'reply_to': None,
            'correlation_id': None
        }

        self.assertRaises(
            exc.MistralException,
            self.server._on_message,
            request,
            message
        )

    @mock.patch.object(kombu_server.KombuRPCServer, 'publish_message')
    @mock.patch.object(kombu_server.KombuRPCServer, '_get_rpc_method')
    @mock.patch('mistral.context.MistralContext.from_dict')
    def test__on_message_is_async(self, mock_get_context, get_rpc_method,
                                  publish_message):
        result = 'result'
        request = {
            'async': True,
            'rpc_ctx': {},
            'rpc_method': 'found_method',
            'arguments': self.server._serialize_message({
                'a': 1,
                'b': 2
            })
        }

        message = mock.MagicMock()
        message.properties = {
            'reply_to': None,
            'correlation_id': None
        }
        message.delivery_info.get.return_value = False

        rpc_method = mock.MagicMock(return_value=result)
        get_rpc_method.return_value = rpc_method

        ctx = context.MistralContext()
        mock_get_context.return_value = ctx

        self.server._on_message(request, message)

        rpc_method.assert_called_once_with(
            rpc_ctx=ctx,
            a=1,
            b=2
        )
        self.assertEqual(publish_message.call_count, 0)

    @mock.patch.object(kombu_server.KombuRPCServer, 'publish_message')
    @mock.patch.object(kombu_server.KombuRPCServer, '_get_rpc_method')
    @mock.patch('mistral.context.MistralContext.from_dict')
    def test__on_message_is_sync(self, mock_get_context, get_rpc_method,
                                 publish_message):
        result = 'result'
        request = {
            'async': False,
            'rpc_ctx': {},
            'rpc_method': 'found_method',
            'arguments': self.server._serialize_message({
                'a': 1,
                'b': 2
            })
        }

        reply_to = 'reply_to'
        correlation_id = 'corr_id'

        message = mock.MagicMock()
        message.properties = {
            'reply_to': reply_to,
            'correlation_id': correlation_id
        }
        message.delivery_info.get.return_value = False

        rpc_method = mock.MagicMock(return_value=result)
        get_rpc_method.return_value = rpc_method

        ctx = context.MistralContext()
        mock_get_context.return_value = ctx

        self.server._on_message(request, message)

        rpc_method.assert_called_once_with(
            rpc_ctx=ctx,
            a=1,
            b=2
        )
        publish_message.assert_called_once_with(
            result,
            reply_to,
            correlation_id
        )

    @mock.patch('stevedore.driver.DriverManager')
    def test__prepare_worker(self, driver_manager_mock):
        worker_mock = mock.MagicMock()
        mgr_mock = mock.MagicMock()
        mgr_mock.driver.return_value = worker_mock

        def side_effect(*args, **kwargs):
            return mgr_mock

        driver_manager_mock.side_effect = side_effect

        self.server._prepare_worker('blocking')

        self.assertEqual(self.server._worker, worker_mock)

    @mock.patch('stevedore.driver.DriverManager')
    def test__prepare_worker_no_valid_executor(self, driver_manager_mock):
        driver_manager_mock.side_effect = driver.NoMatches()

        self.assertRaises(
            driver.NoMatches,
            self.server._prepare_worker,
            'non_valid_executor'
        )


mistral-6.0.0/mistral/tests/unit/rpc/kombu/base.py

# Copyright (c) 2016 Intel Corporation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from mistral import config as cfg
from mistral.rpc.kombu import base as kombu_base
from mistral.tests.unit import base


class KombuTestCase(base.BaseTest):

    def setUp(self):
        super(KombuTestCase, self).setUp()

        kombu_base.set_transport_options(check_backend=False)

        cfg.CONF.set_default('rpc_backend', 'kombu')


mistral-6.0.0/mistral/tests/unit/rpc/kombu/__init__.py (empty)
mistral-6.0.0/mistral/tests/unit/rpc/kombu/test_kombu_client.py

# Copyright (c) 2016 Intel Corporation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from mistral import exceptions as exc
from mistral.tests.unit.rpc.kombu import base
from mistral.tests.unit.rpc.kombu import fake_kombu

import mock
from six import moves

with mock.patch.dict('sys.modules', kombu=fake_kombu):
    from mistral.rpc.kombu import base as kombu_base
    from mistral.rpc.kombu import kombu_client


class TestException(exc.MistralException):
    pass


class KombuClientTestCase(base.KombuTestCase):

    _RESPONSE = "response"

    def setUp(self):
        super(KombuClientTestCase, self).setUp()

        conf = mock.MagicMock()

        listener_class = kombu_client.kombu_listener.KombuRPCListener

        kombu_client.kombu_listener.KombuRPCListener = mock.MagicMock()

        def restore_listener():
            kombu_client.kombu_listener.KombuRPCListener = listener_class

        self.addCleanup(restore_listener)

        self.client = kombu_client.KombuRPCClient(conf)
        self.ctx = type(
            'context',
            (object,),
            {'to_dict': lambda self: {}}
        )()

    def test_sync_call_result_get(self):
        self.client._listener.get_result = mock.MagicMock(
            return_value={
                kombu_base.TYPE: None,
                kombu_base.RESULT: self.client._serialize_message({
                    'body': self._RESPONSE
                })
            }
        )

        response = self.client.sync_call(self.ctx, 'method')

        self.assertEqual(response, self._RESPONSE)

    def test_sync_call_result_not_get(self):
        self.client._listener.get_result = mock.MagicMock(
            side_effect=moves.queue.Empty
        )

        self.assertRaises(
            exc.MistralException,
            self.client.sync_call,
            self.ctx,
            'method_not_found'
        )

    def test_sync_call_result_type_error(self):
        def side_effect(*args, **kwargs):
            return {
                kombu_base.TYPE: 'error',
                kombu_base.RESULT: TestException()
            }

        self.client._wait_for_result = mock.MagicMock(side_effect=side_effect)

        self.assertRaises(
            TestException,
            self.client.sync_call,
            self.ctx,
            'method'
        )

    def test_async_call(self):
        self.assertIsNone(self.client.async_call(self.ctx, 'method'))


mistral-6.0.0/mistral/tests/unit/api/test_access_control.py

# Copyright 2016 NEC Corporation. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from mistral.api import access_control as acl
from mistral import exceptions as exc
from mistral.tests.unit import base
from mistral.tests.unit.mstrlfixtures import policy_fixtures


class PolicyTestCase(base.BaseTest):
    """Tests whether the configuration of the policy engine is correct."""

    def setUp(self):
        super(PolicyTestCase, self).setUp()

        self.policy = self.useFixture(policy_fixtures.PolicyFixture())

        rules = {
            "example:admin": "rule:admin_only",
            "example:admin_or_owner": "rule:admin_or_owner"
        }

        self.policy.register_rules(rules)

    def test_admin_api_allowed(self):
        auth_ctx = base.get_context(default=True, admin=True)

        self.assertTrue(
            acl.enforce('example:admin', auth_ctx, auth_ctx.to_dict())
        )

    def test_admin_api_disallowed(self):
        auth_ctx = base.get_context(default=True)

        self.assertRaises(
            exc.NotAllowedException,
            acl.enforce,
            'example:admin',
            auth_ctx,
            auth_ctx.to_dict()
        )

    def test_admin_or_owner_api_allowed(self):
        auth_ctx = base.get_context(default=True)

        self.assertTrue(
            acl.enforce('example:admin_or_owner', auth_ctx, auth_ctx.to_dict())
        )

    def test_admin_or_owner_api_disallowed(self):
        auth_ctx = base.get_context(default=True)

        target = {'project_id': 'another'}

        self.assertRaises(
            exc.NotAllowedException,
            acl.enforce,
            'example:admin_or_owner',
            auth_ctx,
            target
        )


mistral-6.0.0/mistral/tests/unit/api/v2/test_tasks.py

# Copyright 2013 - Mirantis, Inc.
# Copyright 2015 - StackStorm, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import copy
import datetime
import json

import mock
import sqlalchemy as sa

from mistral.db.v2 import api as db_api
from mistral.db.v2.sqlalchemy import models
from mistral import exceptions as exc
from mistral.rpc import clients as rpc
from mistral.tests.unit.api import base
from mistral.workflow import data_flow
from mistral.workflow import states

# TODO(everyone): later we need additional tests verifying all the errors etc.
RESULT = {"some": "result"}
PUBLISHED = {"var": "val"}
RUNTIME_CONTEXT = {
    'triggered_by': [
        {
            'task_id': '123-123-123',
            'event': 'on-success'
        }
    ]
}

WF_EX = models.WorkflowExecution(
    id='abc',
    workflow_name='some',
    description='execution description.',
    spec={'name': 'some'},
    state=states.RUNNING,
    state_info=None,
    input={'foo': 'bar'},
    output={},
    params={'env': {'k1': 'abc'}},
    created_at=datetime.datetime(1970, 1, 1),
    updated_at=datetime.datetime(1970, 1, 1)
)

TASK_EX = models.TaskExecution(
    id='123',
    name='task',
    workflow_name='flow',
    workflow_id='123e4567-e89b-12d3-a456-426655441111',
    spec={
        'type': 'direct',
        'version': '2.0',
        'name': 'task'
    },
    action_spec={},
    state=states.RUNNING,
    tags=['a', 'b'],
    in_context={},
    runtime_context=RUNTIME_CONTEXT,
    workflow_execution_id=WF_EX.id,
    created_at=datetime.datetime(1970, 1, 1),
    updated_at=datetime.datetime(1970, 1, 1),
    published=PUBLISHED,
    processed=True
)

WITH_ITEMS_TASK_EX = models.TaskExecution(
    id='123',
    name='task',
    workflow_name='flow',
    workflow_id='123e4567-e89b-12d3-a456-426655441111',
    spec={
        'type': 'direct',
        'version': '2.0',
        'name': 'task',
        'with-items': 'var in [1, 2, 3]'
    },
    action_spec={},
    state=states.RUNNING,
    tags=['a', 'b'],
    in_context={},
    runtime_context=RUNTIME_CONTEXT,
    workflow_execution_id=WF_EX.id,
    created_at=datetime.datetime(1970, 1, 1),
    updated_at=datetime.datetime(1970, 1, 1),
    published=PUBLISHED,
    processed=True
)

TASK = {
    'id': '123',
    'name': 'task',
    'workflow_name': 'flow',
    'workflow_id': '123e4567-e89b-12d3-a456-426655441111',
    'state': 'RUNNING',
    'workflow_execution_id': WF_EX.id,
    'created_at': '1970-01-01 00:00:00',
    'updated_at': '1970-01-01 00:00:00',
    'result': json.dumps(RESULT),
    'published': json.dumps(PUBLISHED),
    'runtime_context': json.dumps(RUNTIME_CONTEXT),
    'processed': True
}

TASK_WITHOUT_RESULT = copy.deepcopy(TASK)
del TASK_WITHOUT_RESULT['result']

UPDATED_TASK_EX = copy.deepcopy(TASK_EX)
UPDATED_TASK_EX['state'] = 'SUCCESS'
UPDATED_TASK = copy.deepcopy(TASK)
UPDATED_TASK['state'] = 'SUCCESS'

ERROR_TASK_EX = copy.deepcopy(TASK_EX)
ERROR_TASK_EX['state'] = 'ERROR'
ERROR_ITEMS_TASK_EX = copy.deepcopy(WITH_ITEMS_TASK_EX)
ERROR_ITEMS_TASK_EX['state'] = 'ERROR'
ERROR_TASK = copy.deepcopy(TASK)
ERROR_TASK['state'] = 'ERROR'

BROKEN_TASK = copy.deepcopy(TASK)

RERUN_TASK = {
    'id': '123',
    'state': 'RUNNING'
}

MOCK_WF_EX = mock.MagicMock(return_value=WF_EX)
MOCK_TASK = mock.MagicMock(return_value=TASK_EX)
MOCK_TASKS = mock.MagicMock(return_value=[TASK_EX])
MOCK_EMPTY = mock.MagicMock(return_value=[])
MOCK_NOT_FOUND = mock.MagicMock(side_effect=exc.DBEntityNotFoundError())
MOCK_ERROR_TASK = mock.MagicMock(return_value=ERROR_TASK_EX)
MOCK_ERROR_ITEMS_TASK = mock.MagicMock(return_value=ERROR_ITEMS_TASK_EX)

TASK_EX_WITH_PROJECT_ID = TASK_EX.get_clone()
TASK_EX_WITH_PROJECT_ID.project_id = ''


@mock.patch.object(
    data_flow,
    'get_task_execution_result',
    mock.Mock(return_value=RESULT)
)
class TestTasksController(base.APITest):
    @mock.patch.object(db_api, 'get_task_execution', MOCK_TASK)
    def test_get(self):
        resp = self.app.get('/v2/tasks/123')

        self.assertEqual(200, resp.status_int)
        self.assertDictEqual(TASK, resp.json)

    @mock.patch.object(db_api, 'get_task_execution')
    def test_get_operational_error(self, mocked_get):
        mocked_get.side_effect = [
            # Emulating DB OperationalError
            sa.exc.OperationalError('Mock', 'mock', 'mock'),
            TASK_EX  # Successful run
        ]

        resp = self.app.get('/v2/tasks/123')

        self.assertEqual(200, resp.status_int)
        self.assertDictEqual(TASK, resp.json)

    @mock.patch.object(db_api, 'get_task_execution', MOCK_NOT_FOUND)
    def test_get_not_found(self):
        resp = self.app.get('/v2/tasks/123', expect_errors=True)

        self.assertEqual(404, resp.status_int)

    @mock.patch.object(db_api, 'get_task_executions', MOCK_TASKS)
    def test_get_all(self):
        resp = self.app.get('/v2/tasks')

        self.assertEqual(200, resp.status_int)

        self.assertEqual(1, len(resp.json['tasks']))
        self.assertDictEqual(TASK_WITHOUT_RESULT, resp.json['tasks'][0])

    @mock.patch.object(db_api, 'get_task_executions')
    def test_get_all_operational_error(self, mocked_get_all):
        mocked_get_all.side_effect = [
            # Emulating DB OperationalError
            sa.exc.OperationalError('Mock', 'mock', 'mock'),
            [TASK_EX]  # Successful run
        ]

        resp = self.app.get('/v2/tasks')

        self.assertEqual(200, resp.status_int)
        self.assertEqual(1, len(resp.json['tasks']))
        self.assertDictEqual(TASK_WITHOUT_RESULT, resp.json['tasks'][0])

    @mock.patch.object(db_api, 'get_task_execution',
                       return_value=TASK_EX_WITH_PROJECT_ID)
    def test_get_within_project_id(self, mock_get):
        resp = self.app.get('/v2/tasks/123')

        self.assertEqual(200, resp.status_int)
        self.assertTrue('project_id' in resp.json)

    @mock.patch.object(db_api, 'get_task_executions', MOCK_EMPTY)
    def test_get_all_empty(self):
        resp = self.app.get('/v2/tasks')

        self.assertEqual(200, resp.status_int)
        self.assertEqual(0, len(resp.json['tasks']))

    @mock.patch.object(db_api, 'get_workflow_execution', MOCK_WF_EX)
    @mock.patch.object(
        db_api,
        'get_task_execution',
        mock.MagicMock(side_effect=[ERROR_TASK_EX, TASK_EX])
    )
    @mock.patch.object(rpc.EngineClient, 'rerun_workflow', MOCK_WF_EX)
    def test_put(self):
        params = copy.deepcopy(RERUN_TASK)
        params['reset'] = True

        resp = self.app.put_json('/v2/tasks/123', params=params)

        self.assertEqual(200, resp.status_int)
        self.assertDictEqual(TASK, resp.json)

        rpc.EngineClient.rerun_workflow.assert_called_with(
            TASK_EX.id,
            reset=params['reset'],
            env=None
        )

    @mock.patch.object(db_api, 'get_workflow_execution', MOCK_WF_EX)
    @mock.patch.object(
        db_api,
        'get_task_execution',
        mock.MagicMock(side_effect=[ERROR_TASK_EX, TASK_EX])
    )
    @mock.patch.object(rpc.EngineClient, 'rerun_workflow', MOCK_WF_EX)
    def test_put_missing_reset(self):
        params = copy.deepcopy(RERUN_TASK)

        resp = self.app.put_json(
            '/v2/tasks/123', params=params, expect_errors=True)

        self.assertEqual(400, resp.status_int)
        self.assertIn('faultstring', resp.json)
        self.assertIn('Mandatory field missing', resp.json['faultstring'])

    @mock.patch.object(db_api, 'get_workflow_execution', MOCK_WF_EX)
    @mock.patch.object(
        db_api,
        'get_task_execution',
        mock.MagicMock(side_effect=[ERROR_ITEMS_TASK_EX, WITH_ITEMS_TASK_EX])
    )
    @mock.patch.object(rpc.EngineClient, 'rerun_workflow', MOCK_WF_EX)
    def test_put_with_items(self):
        params = copy.deepcopy(RERUN_TASK)
        params['reset'] = False

        resp = self.app.put_json('/v2/tasks/123', params=params)

        self.assertEqual(200, resp.status_int)
        self.assertDictEqual(TASK, resp.json)

    @mock.patch.object(db_api, 'get_workflow_execution', MOCK_WF_EX)
    @mock.patch.object(
        db_api,
        'get_task_execution',
        mock.MagicMock(side_effect=[ERROR_TASK_EX, TASK_EX])
    )
    @mock.patch.object(rpc.EngineClient, 'rerun_workflow', MOCK_WF_EX)
    def test_put_env(self):
        params = copy.deepcopy(RERUN_TASK)
        params['reset'] = True
        params['env'] = '{"k1": "def"}'

        resp = self.app.put_json('/v2/tasks/123', params=params)

        self.assertEqual(200, resp.status_int)
        self.assertDictEqual(TASK, resp.json)

        rpc.EngineClient.rerun_workflow.assert_called_with(
            TASK_EX.id,
            reset=params['reset'],
            env=json.loads(params['env'])
        )

    @mock.patch.object(db_api, 'get_workflow_execution', MOCK_WF_EX)
    @mock.patch.object(db_api, 'get_task_execution', MOCK_TASK)
    def test_put_current_task_not_in_error(self):
        params = copy.deepcopy(RERUN_TASK)
        params['reset'] = True

        resp = self.app.put_json(
            '/v2/tasks/123',
            params=params,
            expect_errors=True
        )

        self.assertEqual(400, resp.status_int)
        self.assertIn('faultstring', resp.json)
        self.assertIn('execution must be in ERROR', resp.json['faultstring'])

    @mock.patch.object(rpc.EngineClient, 'rerun_workflow', MOCK_WF_EX)
    @mock.patch.object(db_api, 'get_workflow_execution', MOCK_WF_EX)
    @mock.patch.object(db_api, 'get_task_execution', MOCK_ERROR_TASK)
    def test_put_current_task_in_error(self):
        params = copy.deepcopy(RERUN_TASK)
        params['reset'] = True
        params['env'] = '{"k1": "def"}'

        resp = self.app.put_json('/v2/tasks/123', params=params)

        self.assertEqual(200, resp.status_int)

    @mock.patch.object(db_api, 'get_workflow_execution', MOCK_WF_EX)
    @mock.patch.object(db_api, 'get_task_execution', MOCK_ERROR_TASK)
    def test_put_invalid_state(self):
        params = copy.deepcopy(RERUN_TASK)
        params['state'] = states.IDLE
        params['reset'] = True

        resp = self.app.put_json(
            '/v2/tasks/123',
            params=params,
            expect_errors=True
        )

        self.assertEqual(400, resp.status_int)
        self.assertIn('faultstring', resp.json)
        self.assertIn('Invalid task state', resp.json['faultstring'])

    @mock.patch.object(db_api, 'get_workflow_execution', MOCK_WF_EX)
    @mock.patch.object(db_api, 'get_task_execution', MOCK_ERROR_TASK)
    def test_put_invalid_reset(self):
        params = copy.deepcopy(RERUN_TASK)
        params['reset'] = False

        resp = self.app.put_json(
            '/v2/tasks/123',
            params=params,
            expect_errors=True
        )

        self.assertEqual(400, resp.status_int)
        self.assertIn('faultstring', resp.json)
        self.assertIn('Only with-items task', resp.json['faultstring'])

    @mock.patch.object(db_api, 'get_workflow_execution', MOCK_WF_EX)
    @mock.patch.object(db_api, 'get_task_execution', MOCK_ERROR_TASK)
    def test_put_valid_state(self):
        params = copy.deepcopy(RERUN_TASK)
        params['state'] = states.RUNNING
        params['reset'] = True

        resp = self.app.put_json(
            '/v2/tasks/123',
            params=params
        )

        self.assertEqual(200, resp.status_int)

    @mock.patch.object(db_api, 'get_workflow_execution', MOCK_WF_EX)
    @mock.patch.object(db_api, 'get_task_execution', MOCK_ERROR_TASK)
    def test_put_mismatch_task_name(self):
        params = copy.deepcopy(RERUN_TASK)
        params['name'] = 'abc'
        params['reset'] = True

        resp = self.app.put_json(
            '/v2/tasks/123',
            params=params,
            expect_errors=True
        )

        self.assertEqual(400, resp.status_int)
        self.assertIn('faultstring', resp.json)
        self.assertIn('Task name does not match', resp.json['faultstring'])

    @mock.patch.object(rpc.EngineClient, 'rerun_workflow', MOCK_WF_EX)
    @mock.patch.object(db_api, 'get_workflow_execution', MOCK_WF_EX)
    @mock.patch.object(db_api, 'get_task_execution', MOCK_ERROR_TASK)
    def test_put_match_task_name(self):
        params = copy.deepcopy(RERUN_TASK)
        params['name'] = 'task'
        params['reset'] = True

        resp = self.app.put_json(
            '/v2/tasks/123',
            params=params,
            expect_errors=True
        )

        self.assertEqual(200, resp.status_int)

    @mock.patch.object(db_api, 'get_workflow_execution', MOCK_WF_EX)
    @mock.patch.object(db_api, 'get_task_execution', MOCK_ERROR_TASK)
    def test_put_mismatch_workflow_name(self):
        params = copy.deepcopy(RERUN_TASK)
        params['workflow_name'] = 'xyz'
        params['reset'] = True

        resp = self.app.put_json(
            '/v2/tasks/123',
            params=params,
            expect_errors=True
        )

        self.assertEqual(400, resp.status_int)
        self.assertIn('faultstring', resp.json)
        self.assertIn('Workflow name does not match', resp.json['faultstring'])


mistral-6.0.0/mistral/tests/unit/api/v2/test_workflows.py

# Copyright 2013 - Mirantis, Inc.
# Copyright 2015 - StackStorm, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import datetime

import mock
import sqlalchemy as sa

from mistral.db.v2 import api as db_api
from mistral.db.v2.sqlalchemy import models
from mistral import exceptions as exc
from mistral.tests.unit.api import base
from mistral.tests.unit import base as unit_base
from mistral import utils

WF_DEFINITION = """
---
version: '2.0'

flow:
  type: direct
  input:
    - param1

  tasks:
    task1:
      action: std.echo output="Hi"
"""

WF_DB = models.WorkflowDefinition(
    id='123e4567-e89b-12d3-a456-426655440000',
    name='flow',
    definition=WF_DEFINITION,
    created_at=datetime.datetime(1970, 1, 1),
    updated_at=datetime.datetime(1970, 1, 1),
    spec={'input': ['param1']}
)

WF_DB_SYSTEM = WF_DB.get_clone()
WF_DB_SYSTEM.is_system = True

WF = {
    'id': '123e4567-e89b-12d3-a456-426655440000',
    'name': 'flow',
    'definition': WF_DEFINITION,
    'created_at': '1970-01-01 00:00:00',
    'updated_at': '1970-01-01 00:00:00',
    'input': 'param1'
}

WF_DB_WITHIN_ABC_NAMESPACE = models.WorkflowDefinition(
    id='234560fe-162a-4060-a16a-a0d9eee9b408',
    name='flow',
    namespace='abc',
    definition=WF_DEFINITION,
    created_at=datetime.datetime(1970, 1, 1),
    updated_at=datetime.datetime(1970, 1, 1),
    spec={'input': ['param1']}
)

WF_WITH_NAMESPACE = {
    'id': '234560fe-162a-4060-a16a-a0d9eee9b408',
    'name': 'flow',
    'namespace': 'abc',
    'definition': WF_DEFINITION,
    'created_at': '1970-01-01 00:00:00',
    'updated_at': '1970-01-01 00:00:00',
    'input': 'param1'
}

WF_DEFINITION_WITH_INPUT = """
---
version: '2.0'

flow:
  type: direct
  input:
    - param1
    - param2: 2

  tasks:
    task1:
      action: std.echo output="Hi"
"""

WF_DB_WITH_INPUT = models.WorkflowDefinition(
    name='flow',
    definition=WF_DEFINITION_WITH_INPUT,
    created_at=datetime.datetime(1970, 1, 1),
    updated_at=datetime.datetime(1970, 1, 1),
    spec={'input': ['param1', {'param2': 2}]}
)

WF_WITH_DEFAULT_INPUT = {
    'name': 'flow',
    'definition': WF_DEFINITION_WITH_INPUT,
    'created_at': '1970-01-01 00:00:00',
    'updated_at': '1970-01-01 00:00:00',
    'input': 'param1, param2=2'
}

WF_DB_PROJECT_ID = WF_DB.get_clone()
WF_DB_PROJECT_ID.project_id = ''

UPDATED_WF_DEFINITION = """
---
version: '2.0'

flow:
  type: direct
  input:
    - param1
    - param2

  tasks:
    task1:
      action: std.echo output="Hi"
"""

UPDATED_WF_DB = copy.copy(WF_DB)
UPDATED_WF_DB['definition'] = UPDATED_WF_DEFINITION
UPDATED_WF = copy.deepcopy(WF)
UPDATED_WF['definition'] = UPDATED_WF_DEFINITION

WF_DEF_INVALID_MODEL_EXCEPTION = """
---
version: '2.0'

flow:
  type: direct

  tasks:
    task1:
      action: std.echo output="Hi"
      workflow: wf1
"""

WF_DEF_DSL_PARSE_EXCEPTION = """
---
%
"""

WF_DEF_YAQL_PARSE_EXCEPTION = """
---
version: '2.0'

flow:
  type: direct

  tasks:
    task1:
      action: std.echo output=<% * %>
"""

WFS_DEFINITION = """
---
version: '2.0'

wf1:
  tasks:
    task1:
      action: std.echo output="Hello"

wf2:
  tasks:
    task1:
      action: std.echo output="Mistral"
"""

MOCK_WF = mock.MagicMock(return_value=WF_DB)
MOCK_WF_SYSTEM = mock.MagicMock(return_value=WF_DB_SYSTEM)
MOCK_WF_WITH_INPUT = mock.MagicMock(return_value=WF_DB_WITH_INPUT)
MOCK_WFS = mock.MagicMock(return_value=[WF_DB])
MOCK_UPDATED_WF = mock.MagicMock(return_value=UPDATED_WF_DB)
MOCK_DELETE = mock.MagicMock(return_value=None)
MOCK_EMPTY = mock.MagicMock(return_value=[])
MOCK_NOT_FOUND = mock.MagicMock(side_effect=exc.DBEntityNotFoundError())
MOCK_DUPLICATE = mock.MagicMock(side_effect=exc.DBDuplicateEntryError())


class TestWorkflowsController(base.APITest):
    @mock.patch.object(db_api, "get_workflow_definition", MOCK_WF)
    def test_get(self):
        resp = self.app.get('/v2/workflows/123')

        self.assertEqual(200, resp.status_int)
        self.assertDictEqual(WF, resp.json)

    @mock.patch.object(db_api, 'get_workflow_definition')
    def test_get_operational_error(self, mocked_get):
        mocked_get.side_effect = [
            # Emulating DB OperationalError
            sa.exc.OperationalError('Mock', 'mock', 'mock'),
            WF_DB  # Successful run
        ]

        resp = self.app.get('/v2/workflows/123')

        self.assertEqual(200, resp.status_int)
        self.assertDictEqual(WF, resp.json)

    @mock.patch.object(db_api, "get_workflow_definition", MOCK_WF_WITH_INPUT)
    def test_get_with_input(self):
        resp = self.app.get('/v2/workflows/123')

        self.maxDiff = None

        self.assertEqual(200, resp.status_int)
        self.assertDictEqual(WF_WITH_DEFAULT_INPUT, resp.json)

    @mock.patch.object(db_api, "get_workflow_definition", MOCK_NOT_FOUND)
    def test_get_not_found(self):
        resp = self.app.get('/v2/workflows/123', expect_errors=True)

        self.assertEqual(404, resp.status_int)

    @mock.patch.object(
        db_api, "update_workflow_definition", MOCK_UPDATED_WF
    )
    def test_put(self):
        resp = self.app.put(
            '/v2/workflows',
            UPDATED_WF_DEFINITION,
            headers={'Content-Type': 'text/plain'}
        )

        self.maxDiff = None

        self.assertEqual(200, resp.status_int)
        self.assertDictEqual({'workflows': [UPDATED_WF]}, resp.json)

    @mock.patch("mistral.services.workflows.update_workflows")
    def test_put_with_uuid(self, update_mock):
        update_mock.return_value = [UPDATED_WF_DB]

        resp = self.app.put(
            '/v2/workflows/123e4567-e89b-12d3-a456-426655440000',
            UPDATED_WF_DEFINITION,
            headers={'Content-Type': 'text/plain'}
        )

        self.assertEqual(200, resp.status_int)
        update_mock.assert_called_once_with(
            UPDATED_WF_DEFINITION,
            scope='private',
            identifier='123e4567-e89b-12d3-a456-426655440000',
            namespace=''
        )
        self.assertDictEqual(UPDATED_WF, resp.json)

    @mock.patch(
        "mistral.db.v2.sqlalchemy.api.get_workflow_definition",
        return_value=WF_DB_SYSTEM
    )
    def test_put_system(self, get_mock):
        resp = self.app.put(
            '/v2/workflows',
            UPDATED_WF_DEFINITION,
            headers={'Content-Type': 'text/plain'},
            expect_errors=True
        )

        self.assertEqual(400, resp.status_int)
        self.assertIn(
            "Can not modify a system",
            resp.body.decode()
        )

    @mock.patch.object(db_api, "update_workflow_definition")
    def test_put_public(self, mock_update):
        mock_update.return_value = UPDATED_WF_DB

        resp = self.app.put(
            '/v2/workflows?scope=public',
            UPDATED_WF_DEFINITION,
            headers={'Content-Type': 'text/plain'}
        )

        self.assertEqual(200, resp.status_int)
        self.assertDictEqual({'workflows': [UPDATED_WF]}, resp.json)

        self.assertEqual("public", mock_update.call_args[0][1]['scope'])
@mock.patch.object( db_api, "update_workflow_definition", MOCK_WF_WITH_INPUT ) def test_put_with_input(self): resp = self.app.put( '/v2/workflows', WF_DEFINITION_WITH_INPUT, headers={'Content-Type': 'text/plain'} ) self.maxDiff = None self.assertEqual(200, resp.status_int) self.assertDictEqual({'workflows': [WF_WITH_DEFAULT_INPUT]}, resp.json) @mock.patch.object( db_api, "update_workflow_definition", MOCK_NOT_FOUND ) def test_put_not_found(self): resp = self.app.put( '/v2/workflows', UPDATED_WF_DEFINITION, headers={'Content-Type': 'text/plain'}, expect_errors=True, ) self.assertEqual(404, resp.status_int) def test_put_invalid(self): resp = self.app.put( '/v2/workflows', WF_DEF_INVALID_MODEL_EXCEPTION, headers={'Content-Type': 'text/plain'}, expect_errors=True ) self.assertEqual(400, resp.status_int) self.assertIn("Invalid DSL", resp.body.decode()) def test_put_more_workflows_with_uuid(self): resp = self.app.put( '/v2/workflows/123e4567-e89b-12d3-a456-426655440000', WFS_DEFINITION, headers={'Content-Type': 'text/plain'}, expect_errors=True ) self.assertEqual(400, resp.status_int) self.assertIn( "More than one workflows are not supported for update", resp.body.decode() ) @mock.patch.object(db_api, "create_workflow_definition") def test_post(self, mock_mtd): mock_mtd.return_value = WF_DB resp = self.app.post( '/v2/workflows', WF_DEFINITION, headers={'Content-Type': 'text/plain'} ) self.assertEqual(201, resp.status_int) self.assertDictEqual({'workflows': [WF]}, resp.json) self.assertEqual(1, mock_mtd.call_count) spec = mock_mtd.call_args[0][0]['spec'] self.assertIsNotNone(spec) self.assertEqual(WF_DB.name, spec['name']) @mock.patch.object(db_api, "create_workflow_definition") def test_post_public(self, mock_mtd): mock_mtd.return_value = WF_DB resp = self.app.post( '/v2/workflows?scope=public', WF_DEFINITION, headers={'Content-Type': 'text/plain'} ) self.assertEqual(201, resp.status_int) self.assertEqual({"workflows": [WF]}, resp.json) self.assertEqual("public", 
mock_mtd.call_args[0][0]['scope']) @mock.patch.object(db_api, "create_action_definition") def test_post_wrong_scope(self, mock_mtd): mock_mtd.return_value = WF_DB resp = self.app.post( '/v2/workflows?scope=unique', WF_DEFINITION, headers={'Content-Type': 'text/plain'}, expect_errors=True ) self.assertEqual(400, resp.status_int) self.assertIn("Scope must be one of the following", resp.body.decode()) @mock.patch.object(db_api, "create_workflow_definition", MOCK_DUPLICATE) def test_post_dup(self): resp = self.app.post( '/v2/workflows', WF_DEFINITION, headers={'Content-Type': 'text/plain'}, expect_errors=True ) self.assertEqual(409, resp.status_int) def test_post_invalid(self): resp = self.app.post( '/v2/workflows', WF_DEF_INVALID_MODEL_EXCEPTION, headers={'Content-Type': 'text/plain'}, expect_errors=True ) self.assertEqual(400, resp.status_int) self.assertIn("Invalid DSL", resp.body.decode()) @mock.patch.object(db_api, "delete_workflow_definition", MOCK_DELETE) @mock.patch.object(db_api, "get_workflow_definition", MOCK_WF) def test_delete(self): resp = self.app.delete('/v2/workflows/123') self.assertEqual(204, resp.status_int) @mock.patch( "mistral.db.v2.sqlalchemy.api.get_workflow_definition", return_value=WF_DB_SYSTEM ) def test_delete_system(self, get_mock): resp = self.app.delete('/v2/workflows/123', expect_errors=True) self.assertEqual(400, resp.status_int) self.assertIn( "Can not modify a system", resp.body.decode() ) @mock.patch.object(db_api, "delete_workflow_definition", MOCK_NOT_FOUND) def test_delete_not_found(self): resp = self.app.delete('/v2/workflows/123', expect_errors=True) self.assertEqual(404, resp.status_int) @mock.patch.object(db_api, "get_workflow_definitions", MOCK_WFS) def test_get_all(self): resp = self.app.get('/v2/workflows') self.assertEqual(200, resp.status_int) self.assertEqual(1, len(resp.json['workflows'])) self.assertDictEqual(WF, resp.json['workflows'][0]) @mock.patch.object(db_api, 'get_workflow_definitions') def 
test_get_all_operational_error(self, mocked_get_all): mocked_get_all.side_effect = [ # Emulating DB OperationalError sa.exc.OperationalError('Mock', 'mock', 'mock'), [WF_DB] # Successful run ] resp = self.app.get('/v2/workflows') self.assertEqual(200, resp.status_int) self.assertEqual(1, len(resp.json['workflows'])) self.assertDictEqual(WF, resp.json['workflows'][0]) @mock.patch.object(db_api, "get_workflow_definitions", MOCK_EMPTY) def test_get_all_empty(self): resp = self.app.get('/v2/workflows') self.assertEqual(200, resp.status_int) self.assertEqual(0, len(resp.json['workflows'])) @mock.patch('mistral.db.v2.api.get_workflow_definitions') @mock.patch('mistral.context.MistralContext.from_environ') def test_get_all_projects_admin(self, mock_context, mock_get_wf_defs): admin_ctx = unit_base.get_context(admin=True) mock_context.return_value = admin_ctx resp = self.app.get('/v2/workflows?all_projects=true') self.assertEqual(200, resp.status_int) self.assertTrue(mock_get_wf_defs.call_args[1].get('insecure', False)) def test_get_all_projects_normal_user(self): resp = self.app.get( '/v2/workflows?all_projects=true', expect_errors=True ) self.assertEqual(403, resp.status_int) @mock.patch.object(db_api, "get_workflow_definitions", MOCK_WFS) def test_get_all_pagination(self): resp = self.app.get( '/v2/workflows?limit=1&sort_keys=id,name') self.assertEqual(200, resp.status_int) self.assertIn('next', resp.json) self.assertEqual(1, len(resp.json['workflows'])) self.assertDictEqual(WF, resp.json['workflows'][0]) param_dict = utils.get_dict_from_string( resp.json['next'].split('?')[1], delimiter='&' ) expected_dict = { 'marker': '123e4567-e89b-12d3-a456-426655440000', 'limit': 1, 'sort_keys': 'id,name', 'sort_dirs': 'asc,asc', } self.assertDictEqual(expected_dict, param_dict) def test_get_all_pagination_limit_negative(self): resp = self.app.get( '/v2/workflows?limit=-1&sort_keys=id,name&sort_dirs=asc,asc', expect_errors=True ) self.assertEqual(400, resp.status_int) 
self.assertIn("Limit must be positive", resp.body.decode()) def test_get_all_pagination_limit_not_integer(self): resp = self.app.get( '/v2/workflows?limit=1.1&sort_keys=id,name&sort_dirs=asc,asc', expect_errors=True ) self.assertEqual(400, resp.status_int) self.assertIn("unable to convert to int", resp.body.decode()) def test_get_all_pagination_invalid_sort_dirs_length(self): resp = self.app.get( '/v2/workflows?limit=1&sort_keys=id,name&sort_dirs=asc,asc,asc', expect_errors=True ) self.assertEqual(400, resp.status_int) self.assertIn( "Length of sort_keys must be equal or greater than sort_dirs", resp.body.decode() ) def test_get_all_pagination_unknown_direction(self): resp = self.app.get( '/v2/workflows?limit=1&sort_keys=id&sort_dirs=nonexist', expect_errors=True ) self.assertEqual(400, resp.status_int) self.assertIn("Unknown sort direction", resp.body.decode()) @mock.patch('mistral.db.v2.api.get_workflow_definitions') def test_get_all_with_fields_filter(self, mock_get_db_wfs): mock_get_db_wfs.return_value = [ ('123e4567-e89b-12d3-a456-426655440000', 'fake_name') ] resp = self.app.get('/v2/workflows?fields=name') self.assertEqual(200, resp.status_int) self.assertEqual(1, len(resp.json['workflows'])) expected_dict = { 'id': '123e4567-e89b-12d3-a456-426655440000', 'name': 'fake_name' } self.assertDictEqual(expected_dict, resp.json['workflows'][0]) def test_get_all_with_invalid_field(self): resp = self.app.get( '/v2/workflows?fields=name,nonexist', expect_errors=True ) self.assertEqual(400, resp.status_int) self.assertIn( "nonexist are invalid", resp.body.decode() ) def test_validate(self): resp = self.app.post( '/v2/workflows/validate', WF_DEFINITION, headers={'Content-Type': 'text/plain'} ) self.assertEqual(200, resp.status_int) self.assertTrue(resp.json['valid']) def test_validate_invalid_model_exception(self): resp = self.app.post( '/v2/workflows/validate', WF_DEF_INVALID_MODEL_EXCEPTION, headers={'Content-Type': 'text/plain'}, expect_errors=True ) 
self.assertEqual(200, resp.status_int) self.assertFalse(resp.json['valid']) self.assertIn("Invalid DSL", resp.json['error']) def test_validate_dsl_parse_exception(self): resp = self.app.post( '/v2/workflows/validate', WF_DEF_DSL_PARSE_EXCEPTION, headers={'Content-Type': 'text/plain'}, expect_errors=True ) self.assertEqual(200, resp.status_int) self.assertFalse(resp.json['valid']) self.assertIn("Definition could not be parsed", resp.json['error']) def test_validate_yaql_parse_exception(self): resp = self.app.post( '/v2/workflows/validate', WF_DEF_YAQL_PARSE_EXCEPTION, headers={'Content-Type': 'text/plain'}, expect_errors=True ) self.assertEqual(200, resp.status_int) self.assertFalse(resp.json['valid']) self.assertIn("unexpected '*' at position 1", resp.json['error']) def test_validate_empty(self): resp = self.app.post( '/v2/workflows/validate', '', headers={'Content-Type': 'text/plain'}, expect_errors=True ) self.assertEqual(200, resp.status_int) self.assertFalse(resp.json['valid']) self.assertIn("Invalid DSL", resp.json['error']) @mock.patch("mistral.services.workflows.update_workflows") @mock.patch.object(db_api, "create_workflow_definition") def test_workflow_within_namespace(self, mock_mtd, update_mock): mock_mtd.return_value = WF_DB_WITHIN_ABC_NAMESPACE namespace = 'abc' resp = self.app.post( '/v2/workflows?namespace=%s' % namespace, WF_DEFINITION, headers={'Content-Type': 'text/plain'} ) self.assertEqual(201, resp.status_int) self.assertDictEqual({'workflows': [WF_WITH_NAMESPACE]}, resp.json) self.assertEqual(1, mock_mtd.call_count) spec = mock_mtd.call_args[0][0]['spec'] self.assertIsNotNone(spec) self.assertEqual(WF_DB.name, spec['name']) self.assertEqual(WF_DB_WITHIN_ABC_NAMESPACE.namespace, namespace) update_mock.return_value = [WF_DB_WITHIN_ABC_NAMESPACE] id_ = '234560fe-162a-4060-a16a-a0d9eee9b408' resp = self.app.put( '/v2/workflows/%s?namespace=%s' % (id_, namespace), WF_DEFINITION, headers={'Content-Type': 'text/plain'} ) self.assertEqual(200, 
resp.status_int) update_mock.assert_called_once_with( WF_DEFINITION, scope='private', identifier=id_, namespace='abc' ) self.assertDictEqual(WF_WITH_NAMESPACE, resp.json) @mock.patch.object(db_api, "get_workflow_definition") def test_workflow_within_project_id(self, mock_get): mock_get.return_value = WF_DB_PROJECT_ID resp = self.app.get( '/v2/workflows/123e4567-e89b-12d3-a456-426655440000') self.assertEqual(200, resp.status_int) self.assertTrue('project_id' in resp.json) mistral-6.0.0/mistral/tests/unit/api/v2/test_root.py0000666000175100017510000000454213245513261022515 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from oslo_serialization import jsonutils

from mistral.tests.unit.api import base
from mistral.tests.unit.api import test_auth


class TestRootController(base.APITest):
    def test_index(self):
        resp = self.app.get('/', headers={'Accept': 'application/json'})

        self.assertEqual(200, resp.status_int)

        data = jsonutils.loads(resp.body.decode())
        data = data['versions']

        self.assertEqual('v2.0', data[0]['id'])
        self.assertEqual('CURRENT', data[0]['status'])
        self.assertEqual(
            [{'href': 'http://localhost/v2', 'rel': 'self', 'target': 'v2'}],
            data[0]['links']
        )

    def test_v2_root(self):
        resp = self.app.get('/v2/', headers={'Accept': 'application/json'})

        self.assertEqual(200, resp.status_int)

        data = jsonutils.loads(resp.body.decode())

        self.assertEqual(
            'http://localhost/v2',
            data['uri']
        )


class TestRootControllerWithAuth(test_auth.TestKeystoneMiddleware):
    def test_index(self):
        resp = self.app.get('/', headers={'Accept': 'application/json'})

        self.assertEqual(200, resp.status_int)

        data = jsonutils.loads(resp.body.decode())
        data = data['versions']

        self.assertEqual('v2.0', data[0]['id'])
        self.assertEqual('CURRENT', data[0]['status'])
        self.assertEqual(
            [{'href': 'http://localhost/v2', 'rel': 'self', 'target': 'v2'}],
            data[0]['links']
        )

    def test_v2_root(self):
        resp = self.app.get('/v2/', headers={'Accept': 'application/json'})

        self.assertEqual(200, resp.status_int)

        data = jsonutils.loads(resp.body.decode())

        self.assertEqual(
            'http://localhost/v2',
            data['uri']
        )


# --- mistral-6.0.0/mistral/tests/unit/api/v2/test_environment.py ---

# Copyright 2015 - StackStorm, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import copy
import datetime
import json

import mock
import six
import sqlalchemy as sa

from mistral.api.controllers.v2 import resources
from mistral.db.v2 import api as db_api
from mistral.db.v2.sqlalchemy import models as db
from mistral import exceptions as exc
from mistral.tests.unit.api import base

from oslo_utils import uuidutils

DATETIME_FORMAT = '%Y-%m-%d %H:%M:%S.%f'

VARIABLES = {
    'host': 'localhost',
    'db': 'test',
    'timeout': 600,
    'verbose': True,
    '__actions': {
        'std.sql': {
            'conn': 'mysql://admin:secret@<% env().host %>/<% env().db %>'
        }
    }
}

ENVIRONMENT_FOR_CREATE = {
    'name': 'test',
    'description': 'my test settings',
    'variables': VARIABLES,
}

ENVIRONMENT_FOR_UPDATE = {
    'name': 'test',
    'description': 'my test settings',
    'variables': VARIABLES,
    'scope': 'private'
}

ENVIRONMENT_FOR_UPDATE_NO_SCOPE = {
    'name': 'test',
    'description': 'my test settings',
    'variables': VARIABLES
}

ENVIRONMENT = {
    'id': uuidutils.generate_uuid(),
    'name': 'test',
    'description': 'my test settings',
    'variables': VARIABLES,
    'scope': 'private',
    'project_id': '',
    'created_at': str(datetime.datetime.utcnow()),
    'updated_at': str(datetime.datetime.utcnow())
}

ENVIRONMENT_WITH_ILLEGAL_FIELD = {
    'id': uuidutils.generate_uuid(),
    'name': 'test',
    'description': 'my test settings',
    'extra_field': 'I can add whatever I want here',
    'variables': VARIABLES,
    'scope': 'private',
}

ENVIRONMENT_DB = db.Environment(
    id=ENVIRONMENT['id'],
    name=ENVIRONMENT['name'],
    description=ENVIRONMENT['description'],
    variables=copy.deepcopy(VARIABLES),
    scope=ENVIRONMENT['scope'],
    project_id=ENVIRONMENT['project_id'],
    created_at=datetime.datetime.strptime(ENVIRONMENT['created_at'],
                                          DATETIME_FORMAT),
    updated_at=datetime.datetime.strptime(ENVIRONMENT['updated_at'],
                                          DATETIME_FORMAT)
)

ENVIRONMENT_DB_WITH_PROJECT_ID = ENVIRONMENT_DB.get_clone()
ENVIRONMENT_DB_WITH_PROJECT_ID.project_id = ''

ENVIRONMENT_DB_DICT = {k: v for k, v in ENVIRONMENT_DB.items()}

UPDATED_VARIABLES = copy.deepcopy(VARIABLES)
UPDATED_VARIABLES['host'] = '127.0.0.1'

FOR_UPDATED_ENVIRONMENT = copy.deepcopy(ENVIRONMENT_FOR_UPDATE)
FOR_UPDATED_ENVIRONMENT['variables'] = json.dumps(UPDATED_VARIABLES)

UPDATED_ENVIRONMENT = copy.deepcopy(ENVIRONMENT)
UPDATED_ENVIRONMENT['variables'] = json.dumps(UPDATED_VARIABLES)

UPDATED_ENVIRONMENT_DB = db.Environment(**ENVIRONMENT_DB_DICT)
UPDATED_ENVIRONMENT_DB.variables = copy.deepcopy(UPDATED_VARIABLES)

MOCK_ENVIRONMENT = mock.MagicMock(return_value=ENVIRONMENT_DB)
MOCK_ENVIRONMENTS = mock.MagicMock(return_value=[ENVIRONMENT_DB])
MOCK_UPDATED_ENVIRONMENT = mock.MagicMock(return_value=UPDATED_ENVIRONMENT_DB)
MOCK_EMPTY = mock.MagicMock(return_value=[])
MOCK_NOT_FOUND = mock.MagicMock(side_effect=exc.DBEntityNotFoundError())
MOCK_DUPLICATE = mock.MagicMock(side_effect=exc.DBDuplicateEntryError())
MOCK_DELETE = mock.MagicMock(return_value=None)


def _convert_vars_to_dict(env_dict):
    """Converts 'variables' in the given environment dict into dictionary."""
    if ('variables' in env_dict and
            isinstance(env_dict.get('variables'), six.string_types)):
        env_dict['variables'] = json.loads(env_dict['variables'])

    return env_dict


def _convert_vars_to_json(env_dict):
    """Converts 'variables' in the given environment dict into string."""
    if ('variables' in env_dict and
            isinstance(env_dict.get('variables'), dict)):
        env_dict['variables'] = json.dumps(env_dict['variables'])

    return env_dict


class TestEnvironmentController(base.APITest):
    def _assert_dict_equal(self, expected, actual):
        self.assertIsInstance(expected, dict)
        self.assertIsInstance(actual, dict)

        _convert_vars_to_dict(expected)
        _convert_vars_to_dict(actual)

        self.assertDictEqual(expected, actual)

    def test_resource(self):
        resource = resources.Environment(**copy.deepcopy(ENVIRONMENT))

        self._assert_dict_equal(
            copy.deepcopy(ENVIRONMENT),
            resource.to_dict()
        )

    @mock.patch.object(db_api, 'get_environments', MOCK_ENVIRONMENTS)
    def test_get_all(self):
        resp = self.app.get('/v2/environments')

        self.assertEqual(200, resp.status_int)
        self.assertEqual(1, len(resp.json['environments']))

    @mock.patch.object(db_api, 'get_environments')
    def test_get_all_operational_error(self, mocked_get_all):
        mocked_get_all.side_effect = [
            # Emulating DB OperationalError
            sa.exc.OperationalError('Mock', 'mock', 'mock'),
            [ENVIRONMENT_DB]  # Successful run
        ]

        resp = self.app.get('/v2/environments')

        self.assertEqual(200, resp.status_int)
        self.assertEqual(1, len(resp.json['environments']))
        self._assert_dict_equal(ENVIRONMENT, resp.json['environments'][0])

    def test_get_all_empty(self):
        resp = self.app.get('/v2/environments')

        self.assertEqual(200, resp.status_int)
        self.assertEqual(0, len(resp.json['environments']))

    @mock.patch.object(db_api, 'get_environment', MOCK_ENVIRONMENT)
    def test_get(self):
        resp = self.app.get('/v2/environments/123')

        self.assertEqual(200, resp.status_int)
        self._assert_dict_equal(ENVIRONMENT, resp.json)

    @mock.patch.object(db_api, 'get_environment')
    def test_get_operational_error(self, mocked_get):
        mocked_get.side_effect = [
            # Emulating DB OperationalError
            sa.exc.OperationalError('Mock', 'mock', 'mock'),
            ENVIRONMENT_DB  # Successful run
        ]

        resp = self.app.get('/v2/environments/123')

        self.assertEqual(200, resp.status_int)
        self._assert_dict_equal(ENVIRONMENT, resp.json)

    @mock.patch.object(db_api, 'get_environment',
                       return_value=ENVIRONMENT_DB_WITH_PROJECT_ID)
    def test_get_within_project_id(self, mock_get):
        resp = self.app.get('/v2/environments/123')

        self.assertEqual(200, resp.status_int)
        self.assertEqual('', resp.json['project_id'])

    @mock.patch.object(db_api, "get_environment", MOCK_NOT_FOUND)
    def test_get_not_found(self):
        resp = self.app.get('/v2/environments/123', expect_errors=True)

        self.assertEqual(404, resp.status_int)

    @mock.patch.object(db_api, 'create_environment', MOCK_ENVIRONMENT)
    def test_post(self):
        resp = self.app.post_json(
            '/v2/environments',
            _convert_vars_to_json(copy.deepcopy(ENVIRONMENT_FOR_CREATE))
        )

        self.assertEqual(201, resp.status_int)
        self._assert_dict_equal(copy.deepcopy(ENVIRONMENT), resp.json)

    @mock.patch.object(db_api, 'create_environment', MOCK_ENVIRONMENT)
    def test_post_with_illegal_field(self):
        resp = self.app.post_json(
            '/v2/environments',
            _convert_vars_to_json(
                copy.deepcopy(ENVIRONMENT_WITH_ILLEGAL_FIELD)),
            expect_errors=True
        )

        self.assertEqual(400, resp.status_int)

    @mock.patch.object(db_api, 'create_environment', MOCK_DUPLICATE)
    def test_post_dup(self):
        resp = self.app.post_json(
            '/v2/environments',
            _convert_vars_to_json(copy.deepcopy(ENVIRONMENT_FOR_CREATE)),
            expect_errors=True
        )

        self.assertEqual(409, resp.status_int)

    @mock.patch.object(db_api, 'create_environment', MOCK_ENVIRONMENT)
    def test_post_default_scope(self):
        env = _convert_vars_to_json(copy.deepcopy(ENVIRONMENT_FOR_CREATE))

        resp = self.app.post_json('/v2/environments', env)

        self.assertEqual(201, resp.status_int)
        self._assert_dict_equal(copy.deepcopy(ENVIRONMENT), resp.json)

    @mock.patch.object(db_api, 'update_environment', MOCK_UPDATED_ENVIRONMENT)
    def test_put(self):
        resp = self.app.put_json(
            '/v2/environments',
            copy.deepcopy(FOR_UPDATED_ENVIRONMENT)
        )

        self.assertEqual(200, resp.status_int)
        self._assert_dict_equal(UPDATED_ENVIRONMENT, resp.json)

    @mock.patch.object(db_api, 'update_environment', MOCK_UPDATED_ENVIRONMENT)
    def test_put_default_scope(self):
        env = copy.deepcopy(ENVIRONMENT_FOR_UPDATE_NO_SCOPE)
        env['variables'] = json.dumps(env)

        resp = self.app.put_json('/v2/environments', env)

        self.assertEqual(200, resp.status_int)
        self._assert_dict_equal(copy.deepcopy(UPDATED_ENVIRONMENT), resp.json)

    @mock.patch.object(db_api, 'update_environment', MOCK_NOT_FOUND)
    def test_put_not_found(self):
        env = copy.deepcopy(FOR_UPDATED_ENVIRONMENT)

        resp = self.app.put_json(
            '/v2/environments',
            env,
            expect_errors=True
        )

        self.assertEqual(404, resp.status_int)

    @mock.patch.object(db_api, 'delete_environment', MOCK_DELETE)
    def test_delete(self):
        resp = self.app.delete('/v2/environments/123')

        self.assertEqual(204, resp.status_int)

    @mock.patch.object(db_api, 'delete_environment', MOCK_NOT_FOUND)
    def test_delete_not_found(self):
        resp = self.app.delete('/v2/environments/123', expect_errors=True)

        self.assertEqual(404, resp.status_int)


# --- mistral-6.0.0/mistral/tests/unit/api/v2/test_event_trigger.py ---

# Copyright 2016 Catalyst IT Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import json

import mock
import sqlalchemy as sa

from mistral.db.v2 import api as db_api
from mistral.db.v2.sqlalchemy import models
from mistral import exceptions as exc
from mistral.services import triggers
from mistral.tests.unit.api import base
from mistral.tests.unit import base as unit_base

WF = models.WorkflowDefinition(
    spec={
        'version': '2.0',
        'name': 'my_wf',
        'tasks': {
            'task1': {
                'action': 'std.noop'
            }
        }
    }
)
WF.update({'id': '123e4567-e89b-12d3-a456-426655440000', 'name': 'my_wf'})

TRIGGER = {
    'id': '09cc56a9-d15e-4494-a6e2-c4ec8bdaacae',
    'name': 'my_event_trigger',
    'workflow_id': '123e4567-e89b-12d3-a456-426655440000',
    'workflow_input': '{}',
    'workflow_params': '{}',
    'scope': 'private',
    'exchange': 'openstack',
    'topic': 'notification',
    'event': 'compute.instance.create.start'
}

trigger_values = copy.deepcopy(TRIGGER)
trigger_values['workflow_input'] = json.loads(
    trigger_values['workflow_input'])
trigger_values['workflow_params'] = json.loads(
    trigger_values['workflow_params'])

TRIGGER_DB = models.EventTrigger()
TRIGGER_DB.update(trigger_values)

MOCK_WF = mock.MagicMock(return_value=WF)
MOCK_TRIGGER = mock.MagicMock(return_value=TRIGGER_DB)
MOCK_TRIGGERS = mock.MagicMock(return_value=[TRIGGER_DB])
MOCK_NONE = mock.MagicMock(return_value=None)
MOCK_NOT_FOUND = mock.MagicMock(side_effect=exc.DBEntityNotFoundError())


class TestEventTriggerController(base.APITest):
    @mock.patch.object(db_api, "get_event_trigger", MOCK_TRIGGER)
    def test_get(self):
        resp = self.app.get(
            '/v2/event_triggers/09cc56a9-d15e-4494-a6e2-c4ec8bdaacae'
        )

        self.assertEqual(200, resp.status_int)
        self.assertDictEqual(TRIGGER, resp.json)

    @mock.patch.object(db_api, 'get_event_trigger')
    def test_get_operational_error(self, mocked_get):
        mocked_get.side_effect = [
            # Emulating DB OperationalError
            sa.exc.OperationalError('Mock', 'mock', 'mock'),
            TRIGGER_DB  # Successful run
        ]

        resp = self.app.get(
            '/v2/event_triggers/09cc56a9-d15e-4494-a6e2-c4ec8bdaacae'
        )

        self.assertEqual(200, resp.status_int)
        self.assertDictEqual(TRIGGER, resp.json)

    @mock.patch.object(db_api, "get_event_trigger", MOCK_NOT_FOUND)
    def test_get_not_found(self):
        resp = self.app.get(
            '/v2/event_triggers/09cc56a9-d15e-4494-a6e2-c4ec8bdaacae',
            expect_errors=True
        )

        self.assertEqual(404, resp.status_int)

    @mock.patch.object(db_api, "get_workflow_definition_by_id", MOCK_WF)
    @mock.patch.object(db_api, "get_workflow_definition", MOCK_WF)
    @mock.patch.object(db_api, "create_event_trigger", MOCK_TRIGGER)
    @mock.patch.object(db_api, "get_event_triggers", MOCK_TRIGGERS)
    @mock.patch('mistral.rpc.clients.get_event_engine_client')
    def test_post(self, mock_rpc_client):
        client = mock.Mock()
        mock_rpc_client.return_value = client

        CREATE_TRIGGER = copy.deepcopy(TRIGGER)
        CREATE_TRIGGER.pop('id')

        resp = self.app.post_json('/v2/event_triggers', CREATE_TRIGGER)

        self.assertEqual(201, resp.status_int)
        self.assertEqual(1, client.create_event_trigger.call_count)

        trigger_db = TRIGGER_DB.to_dict()
        trigger_db['workflow_namespace'] = None

        self.assertDictEqual(
            trigger_db,
            client.create_event_trigger.call_args[0][0]
        )
        self.assertListEqual(
            ['compute.instance.create.start'],
            client.create_event_trigger.call_args[0][1]
        )

    @mock.patch.object(db_api, "get_workflow_definition_by_id", MOCK_WF)
    @mock.patch.object(db_api, "get_workflow_definition", MOCK_WF)
    @mock.patch.object(triggers, "create_event_trigger")
    def test_post_public(self, create_trigger):
        self.ctx = unit_base.get_context(default=False, admin=True)
        self.mock_ctx.return_value = self.ctx

        trigger = copy.deepcopy(TRIGGER)
        trigger['scope'] = 'public'
        trigger.pop('id')

        resp = self.app.post_json('/v2/event_triggers', trigger)

        self.assertEqual(201, resp.status_int)
        self.assertTrue(create_trigger.called)
        self.assertEqual('public', create_trigger.call_args[0][5])

    def test_post_no_workflow_id(self):
        CREATE_TRIGGER = copy.deepcopy(TRIGGER)
        CREATE_TRIGGER.pop('id')
        CREATE_TRIGGER.pop('workflow_id')

        resp = self.app.post_json(
            '/v2/event_triggers',
            CREATE_TRIGGER,
            expect_errors=True
        )

        self.assertEqual(400, resp.status_int)

    @mock.patch.object(db_api, "get_workflow_definition_by_id",
                       MOCK_NOT_FOUND)
    def test_post_workflow_not_found(self):
        CREATE_TRIGGER = copy.deepcopy(TRIGGER)
        CREATE_TRIGGER.pop('id')

        resp = self.app.post_json(
            '/v2/event_triggers',
            CREATE_TRIGGER,
            expect_errors=True
        )

        self.assertEqual(404, resp.status_int)

    @mock.patch.object(db_api, 'get_event_trigger', MOCK_NONE)
    @mock.patch('mistral.rpc.clients.get_event_engine_client')
    @mock.patch('mistral.db.v2.api.update_event_trigger')
    def test_put(self, mock_update, mock_rpc_client):
        client = mock.Mock()
        mock_rpc_client.return_value = client

        UPDATED_TRIGGER = models.EventTrigger()
        UPDATED_TRIGGER.update(trigger_values)
        UPDATED_TRIGGER.update({'name': 'new_name'})
        mock_update.return_value = UPDATED_TRIGGER

        resp = self.app.put_json(
            '/v2/event_triggers/09cc56a9-d15e-4494-a6e2-c4ec8bdaacae',
            {'name': 'new_name'}
        )

        self.assertEqual(200, resp.status_int)
        self.assertEqual(1, client.update_event_trigger.call_count)
        self.assertDictEqual(
            UPDATED_TRIGGER.to_dict(),
            client.update_event_trigger.call_args[0][0]
        )

    def test_put_field_not_allowed(self):
        resp = self.app.put_json(
            '/v2/event_triggers/09cc56a9-d15e-4494-a6e2-c4ec8bdaacae',
            {'exchange': 'new_exchange'},
            expect_errors=True
        )

        self.assertEqual(400, resp.status_int)

    @mock.patch('mistral.rpc.clients.get_event_engine_client')
    @mock.patch.object(db_api, "get_event_trigger", MOCK_TRIGGER)
    @mock.patch.object(db_api, "get_event_triggers",
                       mock.MagicMock(return_value=[]))
    @mock.patch.object(db_api, "delete_event_trigger", MOCK_NONE)
    def test_delete(self, mock_rpc_client):
        client = mock.Mock()
        mock_rpc_client.return_value = client

        resp = self.app.delete(
            '/v2/event_triggers/09cc56a9-d15e-4494-a6e2-c4ec8bdaacae'
        )

        self.assertEqual(204, resp.status_int)
        self.assertEqual(1, client.delete_event_trigger.call_count)
        self.assertDictEqual(
            TRIGGER_DB.to_dict(),
            client.delete_event_trigger.call_args[0][0]
        )
        self.assertListEqual(
            [],
            client.delete_event_trigger.call_args[0][1]
        )

    @mock.patch.object(db_api, "get_event_trigger", MOCK_NOT_FOUND)
    def test_delete_not_found(self):
        resp = self.app.delete(
            '/v2/event_triggers/09cc56a9-d15e-4494-a6e2-c4ec8bdaacae',
            expect_errors=True
        )

        self.assertEqual(404, resp.status_int)

    @mock.patch.object(db_api, "get_event_triggers", MOCK_TRIGGERS)
    def test_get_all(self):
        resp = self.app.get('/v2/event_triggers')

        self.assertEqual(200, resp.status_int)
        self.assertEqual(1, len(resp.json['event_triggers']))
        self.assertDictEqual(TRIGGER, resp.json['event_triggers'][0])

    @mock.patch.object(db_api, 'get_event_triggers')
    def test_get_all_operational_error(self, mocked_get_all):
        mocked_get_all.side_effect = [
            # Emulating DB OperationalError
            sa.exc.OperationalError('Mock', 'mock', 'mock'),
            [TRIGGER_DB]  # Successful run
        ]

        resp = self.app.get('/v2/event_triggers')

        self.assertEqual(200, resp.status_int)
        self.assertEqual(1, len(resp.json['event_triggers']))
        self.assertDictEqual(TRIGGER, resp.json['event_triggers'][0])

    @mock.patch('mistral.db.v2.api.get_event_triggers')
    @mock.patch('mistral.context.MistralContext.from_environ')
    def test_get_all_projects_admin(self, mock_context, mock_get_wf_defs):
        admin_ctx = unit_base.get_context(admin=True)
        mock_context.return_value = admin_ctx

        resp = self.app.get('/v2/event_triggers?all_projects=true')

        self.assertEqual(200, resp.status_int)
        self.assertTrue(mock_get_wf_defs.call_args[1].get('insecure', False))


# --- mistral-6.0.0/mistral/tests/unit/api/v2/test_action_executions.py ---

# Copyright 2015 - Mirantis, Inc.
# Copyright 2016 - Brocade Communications Systems, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import copy
import datetime
import json

import mock
from oslo_config import cfg
import oslo_messaging
import sqlalchemy as sa

from mistral.api.controllers.v2 import action_execution
from mistral.db.v2 import api as db_api
from mistral.db.v2.sqlalchemy import models
from mistral import exceptions as exc
from mistral.rpc import base as rpc_base
from mistral.rpc import clients as rpc_clients
from mistral.tests.unit.api import base
from mistral.utils import rest_utils
from mistral.workflow import states
from mistral_lib import actions as ml_actions

# This line is needed for correct initialization of messaging config.
oslo_messaging.get_rpc_transport(cfg.CONF)

ACTION_EX_DB = models.ActionExecution(
    id='123',
    workflow_name='flow',
    task_execution=models.TaskExecution(name='task1'),
    task_execution_id='333',
    state=states.SUCCESS,
    state_info=states.SUCCESS,
    tags=['foo', 'fee'],
    name='std.echo',
    description='something',
    accepted=True,
    input={},
    output={},
    created_at=datetime.datetime(1970, 1, 1),
    updated_at=datetime.datetime(1970, 1, 1)
)

AD_HOC_ACTION_EX_DB = models.ActionExecution(
    id='123',
    state=states.SUCCESS,
    state_info=states.SUCCESS,
    tags=['foo', 'fee'],
    name='std.echo',
    description='something',
    accepted=True,
    input={},
    output={},
    created_at=datetime.datetime(1970, 1, 1),
    updated_at=datetime.datetime(1970, 1, 1)
)

AD_HOC_ACTION_EX_ERROR = models.ActionExecution(
    id='123',
    state=states.ERROR,
    state_info=states.ERROR,
    tags=['foo', 'fee'],
    name='std.echo',
    description='something',
    accepted=True,
    input={},
    output={},
    created_at=datetime.datetime(1970, 1, 1),
    updated_at=datetime.datetime(1970, 1, 1)
)

AD_HOC_ACTION_EX_CANCELLED = models.ActionExecution(
    id='123',
    state=states.CANCELLED,
    state_info=states.CANCELLED,
    tags=['foo', 'fee'],
    name='std.echo',
    description='something',
    accepted=True,
    input={},
    output={},
    created_at=datetime.datetime(1970, 1, 1),
    updated_at=datetime.datetime(1970, 1, 1)
)

ACTION_EX_DB_NOT_COMPLETE = models.ActionExecution(
    id='123',
    state=states.RUNNING,
    state_info=states.RUNNING,
    tags=['foo', 'fee'],
    name='std.echo',
    description='something',
    accepted=False,
    input={},
    output={},
    created_at=datetime.datetime(1970, 1, 1),
    updated_at=datetime.datetime(1970, 1, 1)
)

ACTION_EX = {
    'id': '123',
    'workflow_name': 'flow',
    'task_execution_id': '333',
    'task_name': 'task1',
    'state': 'SUCCESS',
    'state_info': 'SUCCESS',
    'tags': ['foo', 'fee'],
    'name': 'std.echo',
    'description': 'something',
    'accepted': True,
    'input': '{}',
    'output': '{}',
    'created_at': '1970-01-01 00:00:00',
    'updated_at': '1970-01-01 00:00:00'
}

UPDATED_ACTION_EX_DB = copy.copy(ACTION_EX_DB).to_dict()
UPDATED_ACTION_EX_DB['state'] = 'SUCCESS'
UPDATED_ACTION_EX_DB['task_name'] = 'task1'

UPDATED_ACTION = copy.deepcopy(ACTION_EX)
UPDATED_ACTION['state'] = 'SUCCESS'
UPDATED_ACTION_OUTPUT = UPDATED_ACTION['output']

CANCELLED_ACTION_EX_DB = copy.copy(ACTION_EX_DB).to_dict()
CANCELLED_ACTION_EX_DB['state'] = 'CANCELLED'
CANCELLED_ACTION_EX_DB['task_name'] = 'task1'

CANCELLED_ACTION = copy.deepcopy(ACTION_EX)
CANCELLED_ACTION['state'] = 'CANCELLED'

PAUSED_ACTION_EX_DB = copy.copy(ACTION_EX_DB).to_dict()
PAUSED_ACTION_EX_DB['state'] = 'PAUSED'
PAUSED_ACTION_EX_DB['task_name'] = 'task1'

PAUSED_ACTION = copy.deepcopy(ACTION_EX)
PAUSED_ACTION['state'] = 'PAUSED'

RUNNING_ACTION_EX_DB = copy.copy(ACTION_EX_DB).to_dict()
RUNNING_ACTION_EX_DB['state'] = 'RUNNING'
RUNNING_ACTION_EX_DB['task_name'] = 'task1'

RUNNING_ACTION = copy.deepcopy(ACTION_EX)
RUNNING_ACTION['state'] = 'RUNNING'

ERROR_ACTION_EX = copy.copy(ACTION_EX_DB).to_dict()
ERROR_ACTION_EX['state'] = 'ERROR'
ERROR_ACTION_EX['task_name'] = 'task1'

ERROR_ACTION = copy.deepcopy(ACTION_EX)
ERROR_ACTION['state'] = 'ERROR'
ERROR_ACTION_RES = ERROR_ACTION['output']

ERROR_OUTPUT = "Fake error, it is a test"

ERROR_ACTION_EX_WITH_OUTPUT = copy.copy(ACTION_EX_DB).to_dict()
ERROR_ACTION_EX_WITH_OUTPUT['state'] = 'ERROR'
ERROR_ACTION_EX_WITH_OUTPUT['task_name'] = 'task1'
ERROR_ACTION_EX_WITH_OUTPUT['output'] = {"output": ERROR_OUTPUT}

ERROR_ACTION_WITH_OUTPUT = copy.deepcopy(ACTION_EX)
ERROR_ACTION_WITH_OUTPUT['state'] = 'ERROR'
ERROR_ACTION_WITH_OUTPUT['output'] = (
    '{"output": "%s"}' % ERROR_OUTPUT
)
ERROR_ACTION_RES_WITH_OUTPUT = {"output": ERROR_OUTPUT}

DEFAULT_ERROR_OUTPUT = "Unknown error"

ERROR_ACTION_EX_FOR_EMPTY_OUTPUT = copy.copy(ACTION_EX_DB).to_dict()
ERROR_ACTION_EX_FOR_EMPTY_OUTPUT['state'] = 'ERROR'
ERROR_ACTION_EX_FOR_EMPTY_OUTPUT['task_name'] = 'task1'
ERROR_ACTION_EX_FOR_EMPTY_OUTPUT['output'] = {"output": DEFAULT_ERROR_OUTPUT}

ERROR_ACTION_FOR_EMPTY_OUTPUT = copy.deepcopy(ERROR_ACTION)
ERROR_ACTION_FOR_EMPTY_OUTPUT['output'] = (
    '{"output": "%s"}' % DEFAULT_ERROR_OUTPUT
)

ERROR_ACTION_WITH_NONE_OUTPUT = copy.deepcopy(ERROR_ACTION)
ERROR_ACTION_WITH_NONE_OUTPUT['output'] = None

BROKEN_ACTION = copy.deepcopy(ACTION_EX)
BROKEN_ACTION['output'] = 'string not escaped'

MOCK_ACTION = mock.MagicMock(return_value=ACTION_EX_DB)
MOCK_ACTION_NOT_COMPLETE = mock.MagicMock(
    return_value=ACTION_EX_DB_NOT_COMPLETE
)
MOCK_ACTION_COMPLETE_ERROR = mock.MagicMock(
    return_value=AD_HOC_ACTION_EX_ERROR
)
MOCK_ACTION_COMPLETE_CANCELLED = mock.MagicMock(
    return_value=AD_HOC_ACTION_EX_CANCELLED
)
MOCK_AD_HOC_ACTION = mock.MagicMock(return_value=AD_HOC_ACTION_EX_DB)
MOCK_ACTIONS = mock.MagicMock(return_value=[ACTION_EX_DB])
MOCK_EMPTY = mock.MagicMock(return_value=[])
MOCK_NOT_FOUND = mock.MagicMock(side_effect=exc.DBEntityNotFoundError())
MOCK_DELETE = mock.MagicMock(return_value=None)

ACTION_EX_DB_WITH_PROJECT_ID = AD_HOC_ACTION_EX_DB.get_clone()
ACTION_EX_DB_WITH_PROJECT_ID.project_id = ''


@mock.patch.object(rpc_base, '_IMPL_CLIENT', mock.Mock())
class TestActionExecutionsController(base.APITest):
    def setUp(self):
        super(TestActionExecutionsController, self).setUp()

        self.addCleanup(
            cfg.CONF.set_default,
            'allow_action_execution_deletion',
            False,
            group='api'
        )

    @mock.patch.object(db_api, 'get_action_execution', MOCK_ACTION)
    def test_get(self):
        resp = self.app.get('/v2/action_executions/123')

        self.assertEqual(200, resp.status_int)
        self.assertDictEqual(ACTION_EX, resp.json)

    @mock.patch.object(db_api, 'get_action_execution')
    def test_get_operational_error(self, mocked_get):
        mocked_get.side_effect = [
            # Emulating DB OperationalError
            sa.exc.OperationalError('Mock', 'mock', 'mock'),
            ACTION_EX_DB  # Successful run
        ]

        resp = self.app.get('/v2/action_executions/123')

        self.assertEqual(200, resp.status_int)
        self.assertDictEqual(ACTION_EX, resp.json)

    def test_basic_get(self):
        resp = self.app.get('/v2/action_executions/')

        self.assertEqual(200, resp.status_int)

    @mock.patch.object(db_api, 'get_action_execution', MOCK_NOT_FOUND)
    def test_get_not_found(self):
        resp = self.app.get('/v2/action_executions/123', expect_errors=True)

        self.assertEqual(404, resp.status_int)

    @mock.patch.object(db_api, 'get_action_execution',
                       return_value=ACTION_EX_DB_WITH_PROJECT_ID)
    def test_get_within_project_id(self, mock_get):
        resp = self.app.get('/v2/action_executions/123')

        self.assertEqual(200, resp.status_int)
        self.assertTrue('project_id' in resp.json)

    @mock.patch.object(rpc_clients.EngineClient, 'start_action')
    def test_post(self, f):
        f.return_value = ACTION_EX_DB.to_dict()

        resp = self.app.post_json(
            '/v2/action_executions',
            {
                'name': 'std.echo',
                'input': "{}",
                'params': '{"save_result": true, "run_sync": true}'
            }
        )

        self.assertEqual(201, resp.status_int)

        action_exec = copy.deepcopy(ACTION_EX)
        del action_exec['task_name']

        self.assertDictEqual(action_exec, resp.json)

        f.assert_called_once_with(
            action_exec['name'],
            json.loads(action_exec['input']),
            description=None,
            save_result=True,
            run_sync=True
        )

    @mock.patch.object(rpc_clients.EngineClient, 'start_action')
    def test_post_with_timeout(self, f):
        f.return_value = ACTION_EX_DB.to_dict()

        resp = self.app.post_json(
            '/v2/action_executions',
            {
                'name': 'std.echo',
                'input': "{}",
                'params': '{"timeout": 2}'
            }
        )

        self.assertEqual(201, resp.status_int)

        action_exec = copy.deepcopy(ACTION_EX)
        del action_exec['task_name']

        self.assertDictEqual(action_exec, resp.json)

        f.assert_called_once_with(
            action_exec['name'],
            json.loads(action_exec['input']),
            description=None,
            timeout=2
        )

    @mock.patch.object(rpc_clients.EngineClient, 'start_action')
    def test_post_json(self, f):
        f.return_value = ACTION_EX_DB.to_dict()

        resp = self.app.post_json(
            '/v2/action_executions',
            {
                'name': 'std.echo',
                'input': {},
                'params': '{"save_result": true}'
            }
        )

        self.assertEqual(201, resp.status_int)

        action_exec = copy.deepcopy(ACTION_EX)
        del action_exec['task_name']

        self.assertDictEqual(action_exec, resp.json)

        f.assert_called_once_with(
            action_exec['name'],
            json.loads(action_exec['input']),
            description=None,
            save_result=True
        )

    @mock.patch.object(rpc_clients.EngineClient, 'start_action')
    def test_post_without_input(self, f):
        f.return_value = ACTION_EX_DB.to_dict()
        f.return_value['output'] = {'result': '123'}

        resp = self.app.post_json(
            '/v2/action_executions',
            {'name': 'nova.servers_list'}
        )

        self.assertEqual(201, resp.status_int)
        self.assertEqual('{"result": "123"}', resp.json['output'])

        f.assert_called_once_with('nova.servers_list', {}, description=None)

    def test_post_bad_result(self):
        resp = self.app.post_json(
            '/v2/action_executions',
            {'input': 'null'},
            expect_errors=True
        )

        self.assertEqual(400, resp.status_int)

    def test_post_bad_input(self):
        resp = self.app.post_json(
            '/v2/action_executions',
            {'input': None},
            expect_errors=True
        )

        self.assertEqual(400, resp.status_int)

    def test_post_bad_json_input(self):
        resp = self.app.post_json(
            '/v2/action_executions',
            {'input': 2},
            expect_errors=True
        )

        self.assertEqual(400, resp.status_int)

    @mock.patch.object(rpc_clients.EngineClient, 'on_action_complete')
    def test_put(self, f):
        f.return_value = UPDATED_ACTION_EX_DB

        resp = self.app.put_json('/v2/action_executions/123', UPDATED_ACTION)

        self.assertEqual(200, resp.status_int)
        self.assertDictEqual(UPDATED_ACTION, resp.json)

        f.assert_called_once_with(
            UPDATED_ACTION['id'],
            ml_actions.Result(data=ACTION_EX_DB.output)
        )

    @mock.patch.object(rpc_clients.EngineClient, 'on_action_complete')
    def test_put_error_with_output(self, f):
        f.return_value = ERROR_ACTION_EX_WITH_OUTPUT

        resp = self.app.put_json(
            '/v2/action_executions/123',
            ERROR_ACTION_WITH_OUTPUT
        )

        self.assertEqual(200, resp.status_int)
        self.assertDictEqual(ERROR_ACTION_WITH_OUTPUT, resp.json)

        f.assert_called_once_with(
            ERROR_ACTION_WITH_OUTPUT['id'],
            ml_actions.Result(error=ERROR_ACTION_RES_WITH_OUTPUT)
        )

    @mock.patch.object(rpc_clients.EngineClient, 'on_action_complete')
    def test_put_error_with_unknown_reason(self, f):
        f.return_value = ERROR_ACTION_EX_FOR_EMPTY_OUTPUT

        resp = self.app.put_json('/v2/action_executions/123', ERROR_ACTION)

        self.assertEqual(200, resp.status_int)
        self.assertDictEqual(ERROR_ACTION_FOR_EMPTY_OUTPUT, resp.json)

        f.assert_called_once_with(
            ERROR_ACTION_FOR_EMPTY_OUTPUT['id'],
            ml_actions.Result(error=DEFAULT_ERROR_OUTPUT)
        )

    @mock.patch.object(rpc_clients.EngineClient, 'on_action_complete')
    def test_put_error_with_unknown_reason_output_none(self, f):
        f.return_value = ERROR_ACTION_EX_FOR_EMPTY_OUTPUT

        resp = self.app.put_json(
            '/v2/action_executions/123',
            ERROR_ACTION_WITH_NONE_OUTPUT
        )

        self.assertEqual(200, resp.status_int)
        self.assertDictEqual(ERROR_ACTION_FOR_EMPTY_OUTPUT, resp.json)

        f.assert_called_once_with(
            ERROR_ACTION_FOR_EMPTY_OUTPUT['id'],
            ml_actions.Result(error=DEFAULT_ERROR_OUTPUT)
        )

    @mock.patch.object(rpc_clients.EngineClient, 'on_action_complete')
    def test_put_cancelled(self, on_action_complete_mock_func):
        on_action_complete_mock_func.return_value = CANCELLED_ACTION_EX_DB

        resp = self.app.put_json('/v2/action_executions/123',
                                 CANCELLED_ACTION)

        self.assertEqual(200, resp.status_int)
        self.assertDictEqual(CANCELLED_ACTION, resp.json)

        on_action_complete_mock_func.assert_called_once_with(
            CANCELLED_ACTION['id'],
            ml_actions.Result(cancel=True)
        )

    @mock.patch.object(rpc_clients.EngineClient, 'on_action_update')
    def test_put_paused(self, on_action_update_mock_func):
        on_action_update_mock_func.return_value = PAUSED_ACTION_EX_DB

        resp = self.app.put_json('/v2/action_executions/123', PAUSED_ACTION)

        self.assertEqual(200, resp.status_int)
        self.assertDictEqual(PAUSED_ACTION, resp.json)

        on_action_update_mock_func.assert_called_once_with(
            PAUSED_ACTION['id'],
            PAUSED_ACTION['state']
        )

    @mock.patch.object(rpc_clients.EngineClient, 'on_action_update')
    def test_put_resume(self, on_action_update_mock_func):
        on_action_update_mock_func.return_value = RUNNING_ACTION_EX_DB

        resp = self.app.put_json('/v2/action_executions/123', RUNNING_ACTION)

        self.assertEqual(200, resp.status_int)
        self.assertDictEqual(RUNNING_ACTION, resp.json)
        on_action_update_mock_func.assert_called_once_with(
            RUNNING_ACTION['id'],
            RUNNING_ACTION['state']
        )

    @mock.patch.object(
        rpc_clients.EngineClient,
        'on_action_complete',
        MOCK_NOT_FOUND
    )
    def test_put_no_action_ex(self):
        resp = self.app.put_json(
            '/v2/action_executions/123',
            UPDATED_ACTION,
            expect_errors=True
        )

        self.assertEqual(404, resp.status_int)

    def test_put_bad_state(self):
        action = copy.deepcopy(ACTION_EX)
        action['state'] = 'DELAYED'

        resp = self.app.put_json(
            '/v2/action_executions/123',
            action,
            expect_errors=True
        )

        self.assertEqual(400, resp.status_int)
        self.assertIn('Expected one of', resp.json['faultstring'])

    def test_put_bad_result(self):
        resp = self.app.put_json(
            '/v2/action_executions/123',
            BROKEN_ACTION,
            expect_errors=True
        )

        self.assertEqual(400, resp.status_int)

    @mock.patch.object(rpc_clients.EngineClient, 'on_action_complete')
    def test_put_without_result(self, f):
        action_ex = copy.deepcopy(UPDATED_ACTION)
        del action_ex['output']

        f.return_value = UPDATED_ACTION_EX_DB

        resp = self.app.put_json('/v2/action_executions/123', action_ex)

        self.assertEqual(200, resp.status_int)

    @mock.patch.object(db_api, 'get_action_executions', MOCK_ACTIONS)
    def test_get_all(self):
        resp = self.app.get('/v2/action_executions')

        self.assertEqual(200, resp.status_int)
        self.assertEqual(1, len(resp.json['action_executions']))
        self.assertDictEqual(ACTION_EX, resp.json['action_executions'][0])

    @mock.patch.object(db_api, 'get_action_executions')
    def test_get_all_operational_error(self, mocked_get_all):
        mocked_get_all.side_effect = [
            # Emulating DB OperationalError
            sa.exc.OperationalError('Mock', 'mock', 'mock'),
            [ACTION_EX_DB]  # Successful run
        ]

        resp = self.app.get('/v2/action_executions')

        self.assertEqual(200, resp.status_int)
        self.assertEqual(1, len(resp.json['action_executions']))
        self.assertDictEqual(ACTION_EX, resp.json['action_executions'][0])

    @mock.patch.object(db_api, 'get_action_executions', MOCK_ACTIONS)
    @mock.patch.object(rest_utils, 'get_all')
    def test_get_all_with_and_without_output(self, mock_get_all):
        resp = self.app.get('/v2/action_executions')

        args, kwargs = mock_get_all.call_args
        resource_function = kwargs['resource_function']

        self.assertEqual(200, resp.status_int)
        self.assertEqual(
            action_execution._get_action_execution_resource_for_list,
            resource_function
        )

        resp = self.app.get('/v2/action_executions?include_output=true')

        args, kwargs = mock_get_all.call_args
        resource_function = kwargs['resource_function']

        self.assertEqual(200, resp.status_int)
        self.assertEqual(
            action_execution._get_action_execution_resource,
            resource_function
        )

    @mock.patch.object(db_api, 'get_action_executions', MOCK_EMPTY)
    def test_get_all_empty(self):
        resp = self.app.get('/v2/action_executions')

        self.assertEqual(200, resp.status_int)
        self.assertEqual(0, len(resp.json['action_executions']))

    @mock.patch.object(db_api, 'get_action_execution', MOCK_AD_HOC_ACTION)
    @mock.patch.object(db_api, 'delete_action_execution', MOCK_DELETE)
    def test_delete(self):
        cfg.CONF.set_default('allow_action_execution_deletion', True, 'api')

        resp = self.app.delete('/v2/action_executions/123')

        self.assertEqual(204, resp.status_int)

    @mock.patch.object(db_api, 'get_action_execution', MOCK_NOT_FOUND)
    def test_delete_not_found(self):
        cfg.CONF.set_default('allow_action_execution_deletion', True, 'api')

        resp = self.app.delete('/v2/action_executions/123',
                               expect_errors=True)

        self.assertEqual(404, resp.status_int)

    def test_delete_not_allowed(self):
        resp = self.app.delete('/v2/action_executions/123',
                               expect_errors=True)

        self.assertEqual(403, resp.status_int)
        self.assertIn(
            "Action execution deletion is not allowed",
            resp.body.decode()
        )

    @mock.patch.object(db_api, 'get_action_execution', MOCK_ACTION)
    def test_delete_action_execution_with_task(self):
        cfg.CONF.set_default('allow_action_execution_deletion', True, 'api')

        resp = self.app.delete('/v2/action_executions/123',
                               expect_errors=True)

        self.assertEqual(403, resp.status_int)
        self.assertIn(
            "Only ad-hoc action execution can be deleted",
            resp.body.decode()
        )

    @mock.patch.object(
        db_api,
        'get_action_execution',
        MOCK_ACTION_NOT_COMPLETE
    )
    def test_delete_action_execution_not_complete(self):
        cfg.CONF.set_default('allow_action_execution_deletion', True, 'api')

        resp = self.app.delete('/v2/action_executions/123',
                               expect_errors=True)

        self.assertEqual(403, resp.status_int)
        self.assertIn(
            "Only completed action execution can be deleted",
            resp.body.decode()
        )

    @mock.patch.object(
        db_api,
        'get_action_execution',
        MOCK_ACTION_COMPLETE_ERROR
    )
    @mock.patch.object(db_api, 'delete_action_execution', MOCK_DELETE)
    def test_delete_action_execution_complete_error(self):
        cfg.CONF.set_default('allow_action_execution_deletion', True, 'api')

        resp = self.app.delete('/v2/action_executions/123',
                               expect_errors=True)

        self.assertEqual(204, resp.status_int)

    @mock.patch.object(
        db_api,
        'get_action_execution',
        MOCK_ACTION_COMPLETE_CANCELLED
    )
    @mock.patch.object(db_api, 'delete_action_execution', MOCK_DELETE)
    def test_delete_action_execution_complete_cancelled(self):
        cfg.CONF.set_default('allow_action_execution_deletion', True, 'api')

        resp = self.app.delete('/v2/action_executions/123',
                               expect_errors=True)

        self.assertEqual(204, resp.status_int)

mistral-6.0.0/mistral/tests/unit/api/v2/__init__.py

mistral-6.0.0/mistral/tests/unit/api/v2/test_members.py

# Copyright 2016 - Catalyst IT Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the
# License for the specific language governing permissions and limitations
# under the License.

import copy

import mock
from oslo_config import cfg
from oslo_utils import uuidutils
import sqlalchemy as sa

from mistral.db.v2 import api as db_api
from mistral.services import security
from mistral.tests.unit.api import base

GET_PROJECT_PATH = 'mistral.services.security.get_project_id'

WF_DEFINITION = {
    'name': 'test_wf',
    'definition': 'empty',
    'spec': {},
    'tags': ['mc'],
    'scope': 'private',
    'project_id': security.get_project_id(),
    'trust_id': '1234'
}

WORKFLOW_MEMBER_PENDING = {
    'member_id': '11-22-33',
    'project_id': '',
    'resource_type': 'workflow',
    'status': 'pending'
}

WORKFLOW_MEMBER_ACCEPTED = {}
MEMBER_URL = None


class TestMembersController(base.APITest):
    def setUp(self):
        super(TestMembersController, self).setUp()

        self.override_config('auth_enable', True, group='pecan')

        wf = db_api.create_workflow_definition(WF_DEFINITION)

        global MEMBER_URL, WORKFLOW_MEMBER_ACCEPTED

        MEMBER_URL = '/v2/workflows/%s/members' % wf.id

        WORKFLOW_MEMBER_PENDING['resource_id'] = wf.id

        WORKFLOW_MEMBER_ACCEPTED = copy.deepcopy(WORKFLOW_MEMBER_PENDING)
        WORKFLOW_MEMBER_ACCEPTED['status'] = 'accepted'

        cfg.CONF.set_default('auth_enable', True, group='pecan')

    def test_membership_api_without_auth(self):
        self.override_config('auth_enable', False, group='pecan')

        resp = self.app.get(MEMBER_URL, expect_errors=True)

        self.assertEqual(400, resp.status_int)
        self.assertIn(
            "Resource sharing feature can only be supported with "
            "authentication enabled",
            resp.body.decode()
        )

    @mock.patch('mistral.context.AuthHook.before')
    def test_create_resource_member(self, auth_mock):
        # Workflow owner shares workflow to another tenant.
        resp = self.app.post_json(MEMBER_URL, {'member_id': '11-22-33'})

        self.assertEqual(201, resp.status_int)
        self._assert_dict_contains_subset(WORKFLOW_MEMBER_PENDING, resp.json)

    @mock.patch('mistral.context.AuthHook.before')
    def test_create_membership_nonexistent_wf(self, auth_mock):
        nonexistent_wf_id = uuidutils.generate_uuid()

        resp = self.app.post_json(
            '/v2/workflows/%s/members' % nonexistent_wf_id,
            {'member_id': '11-22-33'},
            expect_errors=True
        )

        self.assertEqual(404, resp.status_int)

    @mock.patch('mistral.context.AuthHook.before')
    def test_create_duplicate_membership(self, auth_mock):
        resp = self.app.post_json(MEMBER_URL, {'member_id': '11-22-33'})

        self.assertEqual(201, resp.status_int)

        resp = self.app.post_json(
            MEMBER_URL,
            {'member_id': '11-22-33'},
            expect_errors=True
        )

        self.assertEqual(409, resp.status_int)
        self.assertIn("Duplicate entry for ResourceMember", resp.body.decode())

    @mock.patch('mistral.context.AuthHook.before')
    def test_create_membership_public_wf(self, auth_mock):
        WF_DEFINITION_PUBLIC = copy.deepcopy(WF_DEFINITION)
        WF_DEFINITION_PUBLIC['name'] = 'test_wf1'
        WF_DEFINITION_PUBLIC['scope'] = 'public'

        wf_public = db_api.create_workflow_definition(WF_DEFINITION_PUBLIC)

        resp = self.app.post_json(
            '/v2/workflows/%s/members' % wf_public.id,
            {'member_id': '11-22-33'},
            expect_errors=True
        )

        self.assertEqual(400, resp.status_int)
        self.assertIn(
            "Only private resource could be shared",
            resp.body.decode()
        )

    @mock.patch('mistral.context.AuthHook.before')
    def test_create_membership_untransferable(self, auth_mock):
        resp = self.app.post_json(MEMBER_URL, {'member_id': '11-22-33'})

        self.assertEqual(201, resp.status_int)

        # Using mock to switch to another tenant.
        get_mock = mock.MagicMock(return_value='11-22-33')

        with mock.patch(GET_PROJECT_PATH, get_mock):
            resp = self.app.post_json(
                MEMBER_URL,
                {'member_id': 'other_tenant'},
                expect_errors=True
            )

        self.assertEqual(404, resp.status_int)

    @mock.patch('mistral.context.AuthHook.before')
    def test_get_other_memberships(self, auth_mock):
        resp = self.app.post_json(MEMBER_URL, {'member_id': '11-22-33'})

        self.assertEqual(201, resp.status_int)

        # Using mock to switch to another tenant.
        get_mock = mock.MagicMock(return_value='other_tenant')

        with mock.patch(GET_PROJECT_PATH, get_mock):
            resp = self.app.get(MEMBER_URL)

        self.assertEqual(200, resp.status_int)
        self.assertEqual(0, len(resp.json['members']))

    @mock.patch('mistral.context.AuthHook.before')
    @mock.patch.object(db_api, 'get_resource_member')
    def test_get_operational_error(self, mocked_get, auth_mock):
        member_data = {'member_id': '11-22-33'}

        mocked_get.side_effect = [
            # Emulating DB OperationalError
            sa.exc.OperationalError('Mock', 'mock', 'mock'),
            member_data  # Successful run
        ]

        resp = self.app.post_json(MEMBER_URL, member_data)

        self.assertEqual(201, resp.status_int)

        # Using mock to switch to another tenant.
        get_mock = mock.MagicMock(return_value='other_tenant')

        with mock.patch(GET_PROJECT_PATH, get_mock):
            resp = self.app.get(MEMBER_URL)

        self.assertEqual(200, resp.status_int)
        self.assertEqual(0, len(resp.json['members']))

    @mock.patch('mistral.context.AuthHook.before')
    def test_get_memberships_nonexistent_wf(self, auth_mock):
        nonexistent_wf_id = uuidutils.generate_uuid()

        resp = self.app.get('/v2/workflows/%s/members' % nonexistent_wf_id)

        self.assertEqual(200, resp.status_int)
        self.assertEqual(0, len(resp.json['members']))

    @mock.patch('mistral.context.AuthHook.before')
    def test_get_resource_memberips(self, auth_mock):
        # Workflow owner shares workflow to another tenant.
        resp = self.app.post_json(MEMBER_URL, {'member_id': '11-22-33'})

        self.assertEqual(201, resp.status_int)
        self._assert_dict_contains_subset(WORKFLOW_MEMBER_PENDING, resp.json)

        # Workflow owner queries the workflow members.
        resp = self.app.get(MEMBER_URL)

        self.assertEqual(200, resp.status_int)
        self.assertEqual(1, len(resp.json['members']))
        self._assert_dict_contains_subset(
            WORKFLOW_MEMBER_PENDING,
            resp.json['members'][0]
        )

        # Workflow owner queries the exact workflow member.
        resp = self.app.get('%s/11-22-33' % MEMBER_URL)

        self.assertEqual(200, resp.status_int)
        self._assert_dict_contains_subset(
            WORKFLOW_MEMBER_PENDING,
            resp.json
        )

    @mock.patch('mistral.context.AuthHook.before')
    def test_get_other_membership(self, auth_mock):
        resp = self.app.post_json(MEMBER_URL, {'member_id': '11-22-33'})

        self.assertEqual(201, resp.status_int)

        # Using mock to switch to another tenant.
        get_mock = mock.MagicMock(return_value='other_tenant')

        with mock.patch(GET_PROJECT_PATH, get_mock):
            resp = self.app.get(
                '%s/11-22-33' % MEMBER_URL,
                expect_errors=True
            )

        self.assertEqual(404, resp.status_int)

    @mock.patch('mistral.context.AuthHook.before')
    def test_update_membership(self, auth_mock):
        # Workflow owner shares workflow to another tenant.
        resp = self.app.post_json(MEMBER_URL, {'member_id': '11-22-33'})

        self.assertEqual(201, resp.status_int)

        # Using mock to switch to another tenant.
        get_mock = mock.MagicMock(return_value='11-22-33')

        # Tenant accepts the workflow shared to him.
        with mock.patch(GET_PROJECT_PATH, get_mock):
            resp = self.app.put_json(
                '%s/11-22-33' % MEMBER_URL,
                {'status': 'accepted'}
            )

        self.assertEqual(200, resp.status_int)
        self._assert_dict_contains_subset(
            WORKFLOW_MEMBER_ACCEPTED,
            resp.json
        )

        # Tenant queries exact member of workflow shared to him.
        # (status=accepted).
        with mock.patch(GET_PROJECT_PATH, get_mock):
            resp = self.app.get('%s/11-22-33' % MEMBER_URL)

        self.assertEqual(200, resp.status_int)
        self._assert_dict_contains_subset(
            WORKFLOW_MEMBER_ACCEPTED,
            resp.json
        )

        # Workflow owner queries the exact workflow member.
        # (status=accepted).
        resp = self.app.get('%s/11-22-33' % MEMBER_URL)

        self.assertEqual(200, resp.status_int)
        self._assert_dict_contains_subset(
            WORKFLOW_MEMBER_ACCEPTED,
            resp.json
        )

    @mock.patch('mistral.context.AuthHook.before')
    def test_update_membership_invalid_status(self, auth_mock):
        resp = self.app.post_json(MEMBER_URL, {'member_id': '11-22-33'})

        self.assertEqual(201, resp.status_int)

        # Using mock to switch to another tenant.
        get_mock = mock.MagicMock(return_value='11-22-33')

        with mock.patch(GET_PROJECT_PATH, get_mock):
            resp = self.app.put_json(
                '%s/11-22-33' % MEMBER_URL,
                {'status': 'invalid'},
                expect_errors=True
            )

        self.assertEqual(400, resp.status_int)
        self.assertIn(
            "Invalid input",
            resp.body.decode()
        )

    @mock.patch('mistral.context.AuthHook.before')
    def test_update_membership_not_shared_user(self, auth_mock):
        resp = self.app.post_json(MEMBER_URL, {'member_id': '11-22-33'})

        self.assertEqual(201, resp.status_int)

        resp = self.app.put_json(
            '%s/11-22-33' % MEMBER_URL,
            {'status': 'accepted'},
            expect_errors=True
        )

        self.assertEqual(404, resp.status_int)

    @mock.patch('mistral.context.AuthHook.before')
    def test_delete_membership(self, auth_mock):
        # Workflow owner shares workflow to another tenant.
        resp = self.app.post_json(MEMBER_URL, {'member_id': '11-22-33'})

        self.assertEqual(201, resp.status_int)

        # Workflow owner deletes the exact workflow member.
        resp = self.app.delete('%s/11-22-33' % MEMBER_URL)

        self.assertEqual(204, resp.status_int)

        # Workflow owner queries the workflow members.
        resp = self.app.get(MEMBER_URL)

        self.assertEqual(200, resp.status_int)
        self.assertEqual(0, len(resp.json['members']))

        # Using mock to switch to another tenant.
        get_mock = mock.MagicMock(return_value='11-22-33')

        # Tenant queries members of workflow shared to him, after deletion.
        with mock.patch(GET_PROJECT_PATH, get_mock):
            resp = self.app.get(MEMBER_URL)

        self.assertEqual(200, resp.status_int)
        self.assertEqual(0, len(resp.json['members']))

    @mock.patch('mistral.context.AuthHook.before')
    def test_delete_membership_not_owner(self, auth_mock):
        resp = self.app.post_json(MEMBER_URL, {'member_id': '11-22-33'})

        self.assertEqual(201, resp.status_int)

        # Using mock to switch to another tenant.
        get_mock = mock.MagicMock(return_value='11-22-33')

        with mock.patch(GET_PROJECT_PATH, get_mock):
            resp = self.app.delete(
                '%s/11-22-33' % MEMBER_URL,
                expect_errors=True
            )

        self.assertEqual(404, resp.status_int)

mistral-6.0.0/mistral/tests/unit/api/v2/test_keycloak_auth.py

# Copyright 2017 - Nokia Networks
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
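The KeyCloak auth tests that follow assert that the realm name from the decoded token's issuer URL becomes the `X-Project-Id` header and that `realm_access` roles surface as a comma-joined `X-Roles` header (empty when `realm_access` is absent). A minimal standalone sketch of that derivation, using a plain dict in place of a decoded JWT and string operations that only illustrate the expected header values, not the actual `KeycloakAuthHandler` internals:

```python
# Hypothetical decoded-JWT payload, mirroring the token dicts used in the
# tests below.
token = {
    "iss": "http://localhost:8080/auth/realms/my_realm",
    "realm_access": {"roles": ["role1", "role2"]},
}

# The realm name is the last path segment of the issuer URL.
realm = token["iss"].rsplit("/", 1)[-1]

# A missing realm_access claim must degrade to an empty role string
# (see test_no_realm_roles), not raise.
roles = ",".join(token.get("realm_access", {}).get("roles", []))

print(realm)  # my_realm
print(roles)  # role1,role2
```

The same expression applied to a token without `realm_access` yields `""`, matching the `X-Roles` assertion in `test_no_realm_roles`.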
import datetime

import mock
import pecan
import pecan.testing
import requests
import requests_mock
import webob

from mistral.api import app as pecan_app
from mistral.auth import keycloak
from mistral import context
from mistral.db.v2 import api as db_api
from mistral.db.v2.sqlalchemy import models
from mistral import exceptions as exc
from mistral.services import periodic
from mistral.tests.unit import base
from mistral.tests.unit.mstrlfixtures import policy_fixtures

WF_DEFINITION = """
---
version: '2.0'

flow:
  type: direct
  input:
    - param1

  tasks:
    task1:
      action: std.echo output="Hi"
"""

WF_DB = models.WorkflowDefinition(
    id='123e4567-e89b-12d3-a456-426655440000',
    name='flow',
    definition=WF_DEFINITION,
    created_at=datetime.datetime(1970, 1, 1),
    updated_at=datetime.datetime(1970, 1, 1),
    spec={'input': ['param1']}
)

WF = {
    'id': '123e4567-e89b-12d3-a456-426655440000',
    'name': 'flow',
    'definition': WF_DEFINITION,
    'created_at': '1970-01-01 00:00:00',
    'updated_at': '1970-01-01 00:00:00',
    'input': 'param1'
}

MOCK_WF = mock.MagicMock(return_value=WF_DB)

# Set up config options.
AUTH_URL = 'https://my.keycloak.com:8443/auth'
REALM_NAME = 'my_realm'

USER_INFO_ENDPOINT = (
    "%s/realms/%s/protocol/openid-connect/userinfo" % (AUTH_URL, REALM_NAME)
)

USER_CLAIMS = {
    "sub": "248289761001",
    "name": "Jane Doe",
    "given_name": "Jane",
    "family_name": "Doe",
    "preferred_username": "j.doe",
    "email": "janedoe@example.com",
    "picture": "http://example.com/janedoe/me.jpg"
}


class TestKeyCloakOIDCAuth(base.BaseTest):
    def setUp(self):
        super(TestKeyCloakOIDCAuth, self).setUp()

        self.override_config('auth_url', AUTH_URL, group='keycloak_oidc')

        self.auth_handler = keycloak.KeycloakAuthHandler()

    def _build_request(self, token):
        req = webob.Request.blank("/")

        req.headers["x-auth-token"] = token
        req.get_response = lambda app: None

        return req

    @requests_mock.Mocker()
    def test_header_parsing(self, req_mock):
        token = {
            "iss": "http://localhost:8080/auth/realms/my_realm",
            "realm_access": {
                "roles": ["role1", "role2"]
            }
        }

        # Imitate successful response from KeyCloak with user claims.
        req_mock.get(USER_INFO_ENDPOINT, json=USER_CLAIMS)

        req = self._build_request(token)

        with mock.patch("jwt.decode", return_value=token):
            self.auth_handler.authenticate(req)

        self.assertEqual("Confirmed", req.headers["X-Identity-Status"])
        self.assertEqual("my_realm", req.headers["X-Project-Id"])
        self.assertEqual("role1,role2", req.headers["X-Roles"])
        self.assertEqual(1, req_mock.call_count)

    def test_no_auth_token(self):
        req = webob.Request.blank("/")

        self.assertRaises(
            exc.UnauthorizedException,
            self.auth_handler.authenticate,
            req
        )

    @requests_mock.Mocker()
    def test_no_realm_roles(self, req_mock):
        token = {"iss": "http://localhost:8080/auth/realms/my_realm"}

        # Imitate successful response from KeyCloak with user claims.
        req_mock.get(USER_INFO_ENDPOINT, json=USER_CLAIMS)

        req = self._build_request(token)

        with mock.patch("jwt.decode", return_value=token):
            self.auth_handler.authenticate(req)

        self.assertEqual("Confirmed", req.headers["X-Identity-Status"])
        self.assertEqual("my_realm", req.headers["X-Project-Id"])
        self.assertEqual("", req.headers["X-Roles"])

    def test_wrong_token_format(self):
        req = self._build_request(token="WRONG_FORMAT_TOKEN")

        self.assertRaises(
            exc.UnauthorizedException,
            self.auth_handler.authenticate,
            req
        )

    @requests_mock.Mocker()
    def test_server_unauthorized(self, req_mock):
        token = {
            "iss": "http://localhost:8080/auth/realms/my_realm",
        }

        # Imitate failure response from KeyCloak.
        req_mock.get(
            USER_INFO_ENDPOINT,
            status_code=401,
            reason='Access token is invalid'
        )

        req = self._build_request(token)

        with mock.patch("jwt.decode", return_value=token):
            try:
                self.auth_handler.authenticate(req)
            except requests.exceptions.HTTPError as e:
                self.assertIn(
                    "401 Client Error: Access token is invalid for url",
                    str(e)
                )
            else:
                raise Exception("Test is broken")

    @requests_mock.Mocker()
    def test_connection_error(self, req_mock):
        token = {"iss": "http://localhost:8080/auth/realms/my_realm"}

        req_mock.get(USER_INFO_ENDPOINT, exc=requests.ConnectionError)

        req = self._build_request(token)

        with mock.patch("jwt.decode", return_value=token):
            self.assertRaises(
                exc.MistralException,
                self.auth_handler.authenticate,
                req
            )


class TestKeyCloakOIDCAuthScenarios(base.DbTestCase):
    def setUp(self):
        super(TestKeyCloakOIDCAuthScenarios, self).setUp()

        self.override_config('enabled', False, group='cron_trigger')
        self.override_config('auth_enable', True, group='pecan')
        self.override_config('auth_type', 'keycloak-oidc')
        self.override_config('auth_url', AUTH_URL, group='keycloak_oidc')

        self.app = pecan.testing.load_test_app(
            dict(pecan_app.get_pecan_config())
        )

        # Adding cron trigger thread clean up explicitly in case if
        # new tests will provide an alternative configuration for pecan
        # application.
self.addCleanup(periodic.stop_all_periodic_tasks) # Make sure the api get the correct context. self.patch_ctx = mock.patch( 'mistral.context.MistralContext.from_environ' ) self.mock_ctx = self.patch_ctx.start() self.mock_ctx.return_value = self.ctx self.addCleanup(self.patch_ctx.stop) self.policy = self.useFixture(policy_fixtures.PolicyFixture()) @requests_mock.Mocker() @mock.patch.object(db_api, 'get_workflow_definition', MOCK_WF) def test_get_workflow_success_auth(self, req_mock): # Imitate successful response from KeyCloak with user claims. req_mock.get(USER_INFO_ENDPOINT, json=USER_CLAIMS) token = { "iss": "http://localhost:8080/auth/realms/%s" % REALM_NAME, "realm_access": { "roles": ["role1", "role2"] } } headers = {'X-Auth-Token': str(token)} with mock.patch("jwt.decode", return_value=token): resp = self.app.get('/v2/workflows/123', headers=headers) self.assertEqual(200, resp.status_code) self.assertDictEqual(WF, resp.json) @mock.patch("requests.get") @mock.patch.object(db_api, 'get_workflow_definition', MOCK_WF) def test_get_workflow_invalid_token_format(self, req_mock): # Imitate successful response from KeyCloak with user claims. req_mock.get(USER_INFO_ENDPOINT, json=USER_CLAIMS) token = { "iss": "http://localhost:8080/auth/realms/%s" % REALM_NAME, "realm_access": { "roles": ["role1", "role2"] } } headers = {'X-Auth-Token': str(token)} resp = self.app.get( '/v2/workflows/123', headers=headers, expect_errors=True ) self.assertEqual(401, resp.status_code) self.assertEqual('401 Unauthorized', resp.status) self.assertIn('Failed to validate access token', resp.text) self.assertIn( "Token can't be decoded because of wrong format.", resp.text ) @requests_mock.Mocker() @mock.patch.object(db_api, 'get_workflow_definition', MOCK_WF) def test_get_workflow_failed_auth(self, req_mock): # Imitate failure response from KeyCloak. 
req_mock.get( USER_INFO_ENDPOINT, status_code=401, reason='Access token is invalid' ) token = { "iss": "http://localhost:8080/auth/realms/%s" % REALM_NAME, "realm_access": { "roles": ["role1", "role2"] } } headers = {'X-Auth-Token': str(token)} with mock.patch("jwt.decode", return_value=token): resp = self.app.get( '/v2/workflows/123', headers=headers, expect_errors=True ) self.assertEqual(401, resp.status_code) self.assertEqual('401 Unauthorized', resp.status) self.assertIn('Failed to validate access token', resp.text) self.assertIn('Access token is invalid', resp.text) class TestKeyCloakOIDCAuthApp(base.DbTestCase): """Test that Keycloak auth params were successfully passed to Context""" def setUp(self): super(TestKeyCloakOIDCAuthApp, self).setUp() self.override_config('enabled', False, group='cron_trigger') self.override_config('auth_enable', True, group='pecan') self.override_config('auth_type', 'keycloak-oidc') self.override_config('auth_url', AUTH_URL, group='keycloak_oidc') self.app = pecan.testing.load_test_app( dict(pecan_app.get_pecan_config()) ) # Adding cron trigger thread clean up explicitly in case if # new tests will provide an alternative configuration for pecan # application. 
        self.addCleanup(periodic.stop_all_periodic_tasks)

        self.policy = self.useFixture(policy_fixtures.PolicyFixture())

    @requests_mock.Mocker()
    @mock.patch.object(db_api, 'get_workflow_definition', MOCK_WF)
    def test_params_transition(self, req_mock):
        req_mock.get(USER_INFO_ENDPOINT, json=USER_CLAIMS)

        token = {
            "iss": "http://localhost:8080/auth/realms/%s" % REALM_NAME,
            "realm_access": {
                "roles": ["role1", "role2"]
            }
        }

        headers = {
            'X-Auth-Token': str(token)
        }

        with mock.patch("jwt.decode", return_value=token):
            with mock.patch("mistral.context.set_ctx") as mocked_set_cxt:
                self.app.get('/v2/workflows/123', headers=headers)

                calls = mocked_set_cxt.call_args_list

                self.assertEqual(2, len(calls))

                # First positional argument of the first call ('before').
                ctx = calls[0][0][0]

                self.assertIsInstance(ctx, context.MistralContext)
                self.assertEqual('my_realm', ctx.project_id)
                self.assertEqual(["role1", "role2"], ctx.roles)

                # Second call of set_ctx ('after'), where we reset the
                # context.
                self.assertIsNone(calls[1][0][0])
mistral-6.0.0/mistral/tests/unit/api/v2/test_executions.py
# Copyright 2013 - Mirantis, Inc.
# Copyright 2015 - StackStorm, Inc.
# Copyright 2015 Huawei Technologies Co., Ltd.
# Copyright 2016 - Brocade Communications Systems, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
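The `call_args_list` inspection that `test_params_transition` relies on can be shown in isolation. This is a minimal sketch (not Mistral code): each recorded call is a tuple-like object where `[0]` is the positional-args tuple and `[1]` is the kwargs dict:

```python
from unittest import mock

set_ctx = mock.Mock()

# Imitate the two calls the auth hooks make around a request.
set_ctx({'project_id': 'my_realm'})  # 'before' hook sets a context
set_ctx(None)                        # 'after' hook resets it

calls = set_ctx.call_args_list

assert len(calls) == 2
# First positional argument of the first call.
assert calls[0][0][0] == {'project_id': 'my_realm'}
# First positional argument of the second call.
assert calls[1][0][0] is None
```

Indexing `calls[i][0][0]` therefore reads "i-th call, positional args, first argument", which is exactly how the test extracts the context object.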
import copy import datetime import json import mock from oslo_config import cfg import oslo_messaging from oslo_utils import uuidutils import sqlalchemy as sa from webtest import app as webtest_app from mistral.api.controllers.v2 import execution from mistral.db.v2 import api as db_api from mistral.db.v2.sqlalchemy import api as sql_db_api from mistral.db.v2.sqlalchemy import models from mistral import exceptions as exc from mistral.rpc import base as rpc_base from mistral.rpc import clients as rpc_clients from mistral.tests.unit.api import base from mistral.tests.unit import base as unit_base from mistral import utils from mistral.utils import rest_utils from mistral.workflow import states # This line is needed for correct initialization of messaging config. oslo_messaging.get_rpc_transport(cfg.CONF) WF_EX = models.WorkflowExecution( id='123e4567-e89b-12d3-a456-426655440000', workflow_name='some', workflow_id='123e4567-e89b-12d3-a456-426655441111', description='execution description.', spec={'name': 'some'}, state=states.RUNNING, state_info=None, input={'foo': 'bar'}, output={}, params={'env': {'k1': 'abc'}}, created_at=datetime.datetime(1970, 1, 1), updated_at=datetime.datetime(1970, 1, 1) ) WF_EX_JSON = { 'id': '123e4567-e89b-12d3-a456-426655440000', 'input': '{"foo": "bar"}', 'output': '{}', 'params': '{"env": {"k1": "abc"}}', 'state': 'RUNNING', 'state_info': None, 'created_at': '1970-01-01 00:00:00', 'updated_at': '1970-01-01 00:00:00', 'workflow_name': 'some', 'workflow_id': '123e4567-e89b-12d3-a456-426655441111' } SUB_WF_EX = models.WorkflowExecution( id=uuidutils.generate_uuid(), workflow_name='some', workflow_id='123e4567-e89b-12d3-a456-426655441111', description='foobar', spec={'name': 'some'}, state=states.RUNNING, state_info=None, input={'foo': 'bar'}, output={}, params={'env': {'k1': 'abc'}}, created_at=datetime.datetime(1970, 1, 1), updated_at=datetime.datetime(1970, 1, 1), task_execution_id=uuidutils.generate_uuid() ) SUB_WF_EX_JSON = { 'id': 
SUB_WF_EX.id, 'workflow_name': 'some', 'workflow_id': '123e4567-e89b-12d3-a456-426655441111', 'input': '{"foo": "bar"}', 'output': '{}', 'params': '{"env": {"k1": "abc"}}', 'state': 'RUNNING', 'state_info': None, 'created_at': '1970-01-01 00:00:00', 'updated_at': '1970-01-01 00:00:00', 'task_execution_id': SUB_WF_EX.task_execution_id } MOCK_SUB_WF_EXECUTIONS = mock.MagicMock(return_value=[SUB_WF_EX]) SUB_WF_EX_JSON_WITH_DESC = copy.deepcopy(SUB_WF_EX_JSON) SUB_WF_EX_JSON_WITH_DESC['description'] = SUB_WF_EX.description UPDATED_WF_EX = copy.deepcopy(WF_EX) UPDATED_WF_EX['state'] = states.PAUSED UPDATED_WF_EX_JSON = copy.deepcopy(WF_EX_JSON) UPDATED_WF_EX_JSON['state'] = states.PAUSED UPDATED_WF_EX_ENV = copy.deepcopy(UPDATED_WF_EX) UPDATED_WF_EX_ENV['params'] = {'env': {'k1': 'def'}} UPDATED_WF_EX_ENV_DESC = copy.deepcopy(UPDATED_WF_EX) UPDATED_WF_EX_ENV_DESC['description'] = 'foobar' UPDATED_WF_EX_ENV_DESC['params'] = {'env': {'k1': 'def'}} WF_EX_JSON_WITH_DESC = copy.deepcopy(WF_EX_JSON) WF_EX_JSON_WITH_DESC['description'] = WF_EX.description WF_EX_WITH_PROJECT_ID = WF_EX.get_clone() WF_EX_WITH_PROJECT_ID.project_id = '' SOURCE_WF_EX = copy.deepcopy(WF_EX) SOURCE_WF_EX['source_execution_id'] = WF_EX.id SOURCE_WF_EX['id'] = uuidutils.generate_uuid() SOURCE_WF_EX_JSON_WITH_DESC = copy.deepcopy(WF_EX_JSON_WITH_DESC) SOURCE_WF_EX_JSON_WITH_DESC['id'] = SOURCE_WF_EX.id SOURCE_WF_EX_JSON_WITH_DESC['source_execution_id'] = \ SOURCE_WF_EX.source_execution_id MOCK_WF_EX = mock.MagicMock(return_value=WF_EX) MOCK_SUB_WF_EX = mock.MagicMock(return_value=SUB_WF_EX) MOCK_SOURCE_WF_EX = mock.MagicMock(return_value=SOURCE_WF_EX) MOCK_WF_EXECUTIONS = mock.MagicMock(return_value=[WF_EX]) MOCK_UPDATED_WF_EX = mock.MagicMock(return_value=UPDATED_WF_EX) MOCK_DELETE = mock.MagicMock(return_value=None) MOCK_EMPTY = mock.MagicMock(return_value=[]) MOCK_NOT_FOUND = mock.MagicMock(side_effect=exc.DBEntityNotFoundError()) MOCK_ACTION_EXC = mock.MagicMock(side_effect=exc.ActionException()) 
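The module-level mocks above rely on two different `MagicMock` knobs: `return_value` for canned results (`MOCK_WF_EX`, `MOCK_EMPTY`) and `side_effect` for raising (`MOCK_NOT_FOUND`, `MOCK_ACTION_EXC`). A minimal sketch of the difference, with a local `DBEntityNotFoundError` standing in for `mistral.exceptions.DBEntityNotFoundError`:

```python
from unittest import mock

class DBEntityNotFoundError(Exception):
    """Local stand-in for mistral.exceptions.DBEntityNotFoundError."""

# return_value: every call yields the same canned object.
mock_wf_ex = mock.MagicMock(return_value={'id': '123'})
assert mock_wf_ex('any-args', ignored=True) == {'id': '123'}

# side_effect with an exception instance: every call raises it, which is
# how a mock like MOCK_NOT_FOUND simulates a missing DB row.
mock_not_found = mock.MagicMock(side_effect=DBEntityNotFoundError())
try:
    mock_not_found('123')
    raised = False
except DBEntityNotFoundError:
    raised = True
assert raised
```

When both are set, `side_effect` wins unless it returns `mock.DEFAULT`, in which case `return_value` is used.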
@mock.patch.object(rpc_base, '_IMPL_CLIENT', mock.Mock()) class TestExecutionsController(base.APITest): @mock.patch.object(db_api, 'get_workflow_execution', MOCK_WF_EX) def test_get(self): resp = self.app.get('/v2/executions/123') self.assertEqual(200, resp.status_int) self.assertDictEqual(WF_EX_JSON_WITH_DESC, resp.json) @mock.patch.object(db_api, 'get_workflow_execution') def test_get_operational_error(self, mocked_get): mocked_get.side_effect = [ # Emulating DB OperationalError sa.exc.OperationalError('Mock', 'mock', 'mock'), WF_EX # Successful run ] resp = self.app.get('/v2/executions/123') self.assertEqual(200, resp.status_int) self.assertDictEqual(WF_EX_JSON_WITH_DESC, resp.json) @mock.patch.object(db_api, 'get_workflow_execution', MOCK_SUB_WF_EX) def test_get_sub_wf_ex(self): resp = self.app.get('/v2/executions/123') self.assertEqual(200, resp.status_int) self.assertDictEqual(SUB_WF_EX_JSON_WITH_DESC, resp.json) @mock.patch.object(db_api, 'get_workflow_execution', MOCK_NOT_FOUND) def test_get_not_found(self): resp = self.app.get('/v2/executions/123', expect_errors=True) self.assertEqual(404, resp.status_int) @mock.patch.object(db_api, 'get_workflow_execution', return_value=WF_EX_WITH_PROJECT_ID) def test_get_within_project_id(self, mock_get): resp = self.app.get('/v2/executions/123', expect_errors=True) self.assertEqual(200, resp.status_int) self.assertTrue('project_id' in resp.json) @mock.patch.object( db_api, 'get_workflow_execution', mock.MagicMock(return_value=None) ) @mock.patch.object( rpc_clients.EngineClient, 'pause_workflow', MOCK_UPDATED_WF_EX ) def test_put_state_paused(self): update_exec = { 'id': WF_EX['id'], 'state': states.PAUSED } resp = self.app.put_json('/v2/executions/123', update_exec) expected_exec = copy.deepcopy(WF_EX_JSON_WITH_DESC) expected_exec['state'] = states.PAUSED self.assertEqual(200, resp.status_int) self.assertDictEqual(expected_exec, resp.json) @mock.patch.object( db_api, 'get_workflow_execution', 
mock.MagicMock(return_value=None) ) @mock.patch.object(rpc_clients.EngineClient, 'stop_workflow') def test_put_state_error(self, mock_stop_wf): update_exec = { 'id': WF_EX['id'], 'state': states.ERROR, 'state_info': 'Force' } wf_ex = copy.deepcopy(WF_EX) wf_ex['state'] = states.ERROR wf_ex['state_info'] = 'Force' mock_stop_wf.return_value = wf_ex resp = self.app.put_json('/v2/executions/123', update_exec) expected_exec = copy.deepcopy(WF_EX_JSON_WITH_DESC) expected_exec['state'] = states.ERROR expected_exec['state_info'] = 'Force' self.assertEqual(200, resp.status_int) self.assertDictEqual(expected_exec, resp.json) mock_stop_wf.assert_called_once_with('123', 'ERROR', 'Force') @mock.patch.object( db_api, 'get_workflow_execution', mock.MagicMock(return_value=None) ) @mock.patch.object(rpc_clients.EngineClient, 'stop_workflow') def test_put_state_cancelled(self, mock_stop_wf): update_exec = { 'id': WF_EX['id'], 'state': states.CANCELLED, 'state_info': 'Cancelled by user.' } wf_ex = copy.deepcopy(WF_EX) wf_ex['state'] = states.CANCELLED wf_ex['state_info'] = 'Cancelled by user.' mock_stop_wf.return_value = wf_ex resp = self.app.put_json('/v2/executions/123', update_exec) expected_exec = copy.deepcopy(WF_EX_JSON_WITH_DESC) expected_exec['state'] = states.CANCELLED expected_exec['state_info'] = 'Cancelled by user.' self.assertEqual(200, resp.status_int) self.assertDictEqual(expected_exec, resp.json) mock_stop_wf.assert_called_once_with( '123', 'CANCELLED', 'Cancelled by user.' 
) @mock.patch.object( db_api, 'get_workflow_execution', mock.MagicMock(return_value=None) ) @mock.patch.object(rpc_clients.EngineClient, 'resume_workflow') def test_put_state_resume(self, mock_resume_wf): update_exec = { 'id': WF_EX['id'], 'state': states.RUNNING } wf_ex = copy.deepcopy(WF_EX) wf_ex['state'] = states.RUNNING wf_ex['state_info'] = None mock_resume_wf.return_value = wf_ex resp = self.app.put_json('/v2/executions/123', update_exec) expected_exec = copy.deepcopy(WF_EX_JSON_WITH_DESC) expected_exec['state'] = states.RUNNING expected_exec['state_info'] = None self.assertEqual(200, resp.status_int) self.assertDictEqual(expected_exec, resp.json) mock_resume_wf.assert_called_once_with('123', env=None) @mock.patch.object( db_api, 'get_workflow_execution', mock.MagicMock(return_value=None) ) def test_put_invalid_state(self): invalid_states = [states.IDLE, states.WAITING, states.RUNNING_DELAYED] for state in invalid_states: update_exec = { 'id': WF_EX['id'], 'state': state } resp = self.app.put_json( '/v2/executions/123', update_exec, expect_errors=True ) self.assertEqual(400, resp.status_int) self.assertIn( 'Cannot change state to %s.' 
% state, resp.json['faultstring'] ) @mock.patch.object( db_api, 'get_workflow_execution', mock.MagicMock(return_value=None) ) @mock.patch.object(rpc_clients.EngineClient, 'stop_workflow') def test_put_state_info_unset(self, mock_stop_wf): update_exec = { 'id': WF_EX['id'], 'state': states.ERROR, } wf_ex = copy.deepcopy(WF_EX) wf_ex['state'] = states.ERROR del wf_ex.state_info mock_stop_wf.return_value = wf_ex resp = self.app.put_json('/v2/executions/123', update_exec) expected_exec = copy.deepcopy(WF_EX_JSON_WITH_DESC) expected_exec['state'] = states.ERROR expected_exec['state_info'] = None self.assertEqual(200, resp.status_int) self.assertDictEqual(expected_exec, resp.json) mock_stop_wf.assert_called_once_with('123', 'ERROR', None) @mock.patch('mistral.db.v2.api.get_workflow_execution') @mock.patch( 'mistral.db.v2.api.update_workflow_execution', return_value=WF_EX ) def test_put_description(self, mock_update, mock_ensure): update_params = {'description': 'execution description.'} resp = self.app.put_json('/v2/executions/123', update_params) self.assertEqual(200, resp.status_int) mock_ensure.assert_called_once_with('123') mock_update.assert_called_once_with('123', update_params) @mock.patch.object( sql_db_api, 'get_workflow_execution', mock.MagicMock(return_value=copy.deepcopy(UPDATED_WF_EX)) ) @mock.patch( 'mistral.services.workflows.update_workflow_execution_env', return_value=copy.deepcopy(UPDATED_WF_EX_ENV) ) def test_put_env(self, mock_update_env): update_exec = {'params': '{"env": {"k1": "def"}}'} resp = self.app.put_json('/v2/executions/123', update_exec) self.assertEqual(200, resp.status_int) self.assertEqual(update_exec['params'], resp.json['params']) mock_update_env.assert_called_once_with(UPDATED_WF_EX, {'k1': 'def'}) @mock.patch.object(db_api, 'update_workflow_execution', MOCK_NOT_FOUND) def test_put_not_found(self): resp = self.app.put_json( '/v2/executions/123', dict(state=states.PAUSED), expect_errors=True ) self.assertEqual(404, resp.status_int) 
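The `*_operational_error` tests in this file hand `side_effect` a list so the first DB call fails and the retry succeeds. A minimal sketch of that replay behavior, with a local `OperationalError` standing in for `sqlalchemy.exc.OperationalError`:

```python
from unittest import mock

class OperationalError(Exception):
    """Local stand-in for sqlalchemy.exc.OperationalError."""

# side_effect given a list replays one item per call; an exception
# instance in the list is raised at its turn. This emulates a transient
# DB failure followed by a successful retry.
db_get = mock.MagicMock(
    side_effect=[OperationalError('deadlock'), {'id': '123'}]
)

try:
    db_get('123')
    first_failed = False
except OperationalError:
    first_failed = True

assert first_failed                     # first call raised
assert db_get('123') == {'id': '123'}   # second call returned the row
assert db_get.call_count == 2
```

A third call would raise `StopIteration`, so the list length also pins down how many calls the code under test may make.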
@mock.patch.object( db_api, 'get_workflow_execution', mock.MagicMock(return_value=None) ) def test_put_empty(self): resp = self.app.put_json('/v2/executions/123', {}, expect_errors=True) self.assertEqual(400, resp.status_int) self.assertIn( 'state, description, or env is not provided for update', resp.json['faultstring'] ) @mock.patch.object( db_api, 'get_workflow_execution', mock.MagicMock(return_value=None) ) def test_put_state_and_description(self): resp = self.app.put_json( '/v2/executions/123', {'description': 'foobar', 'state': states.ERROR}, expect_errors=True ) self.assertEqual(400, resp.status_int) self.assertIn( 'description must be updated separately from state', resp.json['faultstring'] ) @mock.patch.object( sql_db_api, 'get_workflow_execution', mock.MagicMock(return_value=copy.deepcopy(UPDATED_WF_EX)) ) @mock.patch( 'mistral.db.v2.api.update_workflow_execution', return_value=WF_EX ) @mock.patch( 'mistral.services.workflows.update_workflow_execution_env', return_value=copy.deepcopy(UPDATED_WF_EX_ENV_DESC) ) def test_put_env_and_description(self, mock_update_env, mock_update): update_exec = { 'description': 'foobar', 'params': '{"env": {"k1": "def"}}' } resp = self.app.put_json('/v2/executions/123', update_exec) self.assertEqual(200, resp.status_int) self.assertEqual(update_exec['description'], resp.json['description']) self.assertEqual(update_exec['params'], resp.json['params']) mock_update.assert_called_once_with('123', {'description': 'foobar'}) mock_update_env.assert_called_once_with(UPDATED_WF_EX, {'k1': 'def'}) @mock.patch.object( db_api, 'get_workflow_execution', mock.MagicMock(return_value=None) ) def test_put_env_wrong_state(self): update_exec = { 'id': WF_EX['id'], 'state': states.SUCCESS, 'params': '{"env": {"k1": "def"}}' } resp = self.app.put_json( '/v2/executions/123', update_exec, expect_errors=True ) self.assertEqual(400, resp.status_int) expected_fault = ( 'env can only be updated when workflow execution ' 'is not running or on resume 
from pause' ) self.assertIn(expected_fault, resp.json['faultstring']) @mock.patch.object(rpc_clients.EngineClient, 'start_workflow') @mock.patch.object(db_api, 'load_workflow_execution') def test_post_auto_id(self, load_wf_ex_func, start_wf_func): # NOTE: In fact, we use "white box" testing here to understand # if the REST controller calls other APIs as expected. This is # the only way of testing available with the current testing # infrastructure. start_wf_func.return_value = WF_EX.to_dict() json_body = WF_EX_JSON_WITH_DESC.copy() # We don't want to pass execution ID in this case. del json_body['id'] expected_json = WF_EX_JSON_WITH_DESC resp = self.app.post_json('/v2/executions', json_body) self.assertEqual(201, resp.status_int) self.assertDictEqual(expected_json, resp.json) load_wf_ex_func.assert_not_called() start_wf_func.assert_called_once_with( expected_json['workflow_id'], '', None, json.loads(expected_json['input']), expected_json['description'], **json.loads(expected_json['params']) ) @mock.patch.object(rpc_clients.EngineClient, 'start_workflow') @mock.patch.object(db_api, 'load_workflow_execution') def test_post_with_exec_id_exec_doesnt_exist(self, load_wf_ex_func, start_wf_func): # NOTE: In fact, we use "white box" testing here to understand # if the REST controller calls other APIs as expected. This is # the only way of testing available with the current testing # infrastructure. # Imitate that the execution doesn't exist in DB. load_wf_ex_func.return_value = None start_wf_func.return_value = WF_EX.to_dict() # We want to pass execution ID in this case so we don't delete 'id' # from the dict. 
json_body = WF_EX_JSON_WITH_DESC.copy() expected_json = WF_EX_JSON_WITH_DESC resp = self.app.post_json('/v2/executions', json_body) self.assertEqual(201, resp.status_int) self.assertDictEqual(expected_json, resp.json) load_wf_ex_func.assert_called_once_with(expected_json['id']) start_wf_func.assert_called_once_with( expected_json['workflow_id'], '', expected_json['id'], json.loads(expected_json['input']), expected_json['description'], **json.loads(expected_json['params']) ) @mock.patch.object(rpc_clients.EngineClient, 'start_workflow') @mock.patch.object(db_api, 'load_workflow_execution') def test_post_with_exec_id_exec_exists(self, load_wf_ex_func, start_wf_func): # NOTE: In fact, we use "white box" testing here to understand # if the REST controller calls other APIs as expected. This is # the only way of testing available with the current testing # infrastructure. # Imitate that the execution exists in DB. load_wf_ex_func.return_value = WF_EX # We want to pass execution ID in this case so we don't delete 'id' # from the dict. json_body = WF_EX_JSON_WITH_DESC.copy() expected_json = WF_EX_JSON_WITH_DESC resp = self.app.post_json('/v2/executions', json_body) self.assertEqual(201, resp.status_int) self.assertDictEqual(expected_json, resp.json) load_wf_ex_func.assert_called_once_with(expected_json['id']) # Note that "start_workflow" method on engine API should not be called # in this case because we passed execution ID to the endpoint and the # corresponding object exists. 
start_wf_func.assert_not_called() @mock.patch.object(db_api, 'get_workflow_execution', MOCK_WF_EX) @mock.patch.object(rpc_clients.EngineClient, 'start_workflow') def test_post_with_source_execution_id(self, wf_exec_mock): wf_exec_mock.return_value = SOURCE_WF_EX.to_dict() resp = self.app.post_json('/v2/executions/', SOURCE_WF_EX_JSON_WITH_DESC) source_wf_ex_json = copy.copy(SOURCE_WF_EX_JSON_WITH_DESC) del source_wf_ex_json['source_execution_id'] self.assertEqual(201, resp.status_int) self.assertDictEqual(source_wf_ex_json, resp.json) exec_dict = source_wf_ex_json wf_exec_mock.assert_called_once_with( exec_dict['workflow_id'], '', exec_dict['id'], json.loads(exec_dict['input']), exec_dict['description'], **json.loads(exec_dict['params']) ) @mock.patch.object(db_api, 'get_workflow_execution', MOCK_WF_EX) @mock.patch.object(rpc_clients.EngineClient, 'start_workflow') def test_post_with_src_exec_id_without_exec_id(self, wf_exec_mock): source_wf_ex = copy.copy(SOURCE_WF_EX) source_wf_ex.id = "" source_wf_ex_json = copy.copy(SOURCE_WF_EX_JSON_WITH_DESC) source_wf_ex_json['id'] = '' wf_exec_mock.return_value = source_wf_ex.to_dict() resp = self.app.post_json('/v2/executions/', source_wf_ex_json) del source_wf_ex_json['source_execution_id'] self.assertEqual(201, resp.status_int) self.assertDictEqual(source_wf_ex_json, resp.json) exec_dict = source_wf_ex_json wf_exec_mock.assert_called_once_with( exec_dict['workflow_id'], '', '', json.loads(exec_dict['input']), exec_dict['description'], **json.loads(exec_dict['params']) ) @mock.patch.object(db_api, 'get_workflow_execution', MOCK_EMPTY) @mock.patch.object(rpc_clients.EngineClient, 'start_workflow') def test_post_without_source_execution_id(self, wf_exec_mock): wf_exec_mock.return_value = SOURCE_WF_EX.to_dict() source_wf_ex_json = copy.copy(SOURCE_WF_EX_JSON_WITH_DESC) source_wf_ex_json['source_execution_id'] = "" # here we want to pass an empty value into the api for the # source execution id to make sure that the correct 
actions are # taken. resp = self.app.post_json('/v2/executions/', source_wf_ex_json) self.assertEqual(201, resp.status_int) del source_wf_ex_json['source_execution_id'] # here we have to remove the source execution key as the # id is only used to perform a lookup. self.assertDictEqual(source_wf_ex_json, resp.json) exec_dict = source_wf_ex_json wf_exec_mock.assert_called_once_with( exec_dict['workflow_id'], '', exec_dict['id'], json.loads(exec_dict['input']), exec_dict['description'], **json.loads(exec_dict['params']) ) @mock.patch.object( rpc_clients.EngineClient, 'start_workflow', MOCK_ACTION_EXC ) def test_post_throws_exception(self): context = self.assertRaises( webtest_app.AppError, self.app.post_json, '/v2/executions', WF_EX_JSON ) self.assertIn('Bad response: 400', context.args[0]) def test_post_without_workflow_id_and_name(self): context = self.assertRaises( webtest_app.AppError, self.app.post_json, '/v2/executions', {'description': 'some description here.'} ) self.assertIn('Bad response: 400', context.args[0]) @mock.patch.object(db_api, 'delete_workflow_execution', MOCK_DELETE) def test_delete(self): resp = self.app.delete('/v2/executions/123') self.assertEqual(204, resp.status_int) @mock.patch.object(db_api, 'delete_workflow_execution', MOCK_NOT_FOUND) def test_delete_not_found(self): resp = self.app.delete('/v2/executions/123', expect_errors=True) self.assertEqual(404, resp.status_int) @mock.patch.object(db_api, 'get_workflow_executions', MOCK_WF_EXECUTIONS) def test_get_all(self): resp = self.app.get('/v2/executions') self.assertEqual(200, resp.status_int) self.assertEqual(1, len(resp.json['executions'])) self.assertDictEqual(WF_EX_JSON_WITH_DESC, resp.json['executions'][0]) @mock.patch.object(db_api, 'get_workflow_executions') def test_get_all_operational_error(self, mocked_get_all): mocked_get_all.side_effect = [ # Emulating DB OperationalError sa.exc.OperationalError('Mock', 'mock', 'mock'), [WF_EX] # Successful run ] resp = 
self.app.get('/v2/executions') self.assertEqual(200, resp.status_int) self.assertEqual(1, len(resp.json['executions'])) self.assertDictEqual(WF_EX_JSON_WITH_DESC, resp.json['executions'][0]) @mock.patch.object(db_api, 'get_workflow_executions', MOCK_EMPTY) def test_get_all_empty(self): resp = self.app.get('/v2/executions') self.assertEqual(200, resp.status_int) self.assertEqual(0, len(resp.json['executions'])) @mock.patch.object(db_api, "get_workflow_executions", MOCK_WF_EXECUTIONS) def test_get_all_pagination(self): resp = self.app.get( '/v2/executions?limit=1&sort_keys=id,workflow_name' '&sort_dirs=asc,desc') self.assertEqual(200, resp.status_int) self.assertIn('next', resp.json) self.assertEqual(1, len(resp.json['executions'])) self.assertDictEqual(WF_EX_JSON_WITH_DESC, resp.json['executions'][0]) param_dict = utils.get_dict_from_string( resp.json['next'].split('?')[1], delimiter='&' ) expected_dict = { 'marker': '123e4567-e89b-12d3-a456-426655440000', 'limit': 1, 'sort_keys': 'id,workflow_name', 'sort_dirs': 'asc,desc' } self.assertDictEqual(expected_dict, param_dict) def test_get_all_pagination_limit_negative(self): resp = self.app.get( '/v2/executions?limit=-1&sort_keys=id&sort_dirs=asc', expect_errors=True ) self.assertEqual(400, resp.status_int) self.assertIn("Limit must be positive", resp.body.decode()) def test_get_all_pagination_limit_not_integer(self): resp = self.app.get( '/v2/executions?limit=1.1&sort_keys=id&sort_dirs=asc', expect_errors=True ) self.assertEqual(400, resp.status_int) self.assertIn("unable to convert to int", resp.body.decode()) def test_get_all_pagination_invalid_sort_dirs_length(self): resp = self.app.get( '/v2/executions?limit=1&sort_keys=id&sort_dirs=asc,asc', expect_errors=True ) self.assertEqual(400, resp.status_int) self.assertIn( "Length of sort_keys must be equal or greater than sort_dirs", resp.body.decode() ) def test_get_all_pagination_unknown_direction(self): resp = self.app.get( 
'/v2/actions?limit=1&sort_keys=id&sort_dirs=nonexist', expect_errors=True ) self.assertEqual(400, resp.status_int) self.assertIn("Unknown sort direction", resp.body.decode()) @mock.patch.object( db_api, 'get_workflow_executions', MOCK_SUB_WF_EXECUTIONS ) def test_get_task_workflow_executions(self): resp = self.app.get( '/v2/tasks/%s/workflow_executions' % SUB_WF_EX.task_execution_id ) self.assertEqual(200, resp.status_int) self.assertEqual(1, len(resp.json['executions'])) self.assertDictEqual( SUB_WF_EX_JSON_WITH_DESC, resp.json['executions'][0] ) @mock.patch.object(db_api, 'get_workflow_executions', MOCK_WF_EXECUTIONS) @mock.patch.object(rest_utils, 'get_all') def test_get_all_executions_with_output(self, mock_get_all): resp = self.app.get('/v2/executions?include_output=true') self.assertEqual(200, resp.status_int) args, kwargs = mock_get_all.call_args resource_function = kwargs['resource_function'] self.assertEqual( execution._get_workflow_execution_resource, resource_function ) @mock.patch.object(db_api, 'get_workflow_executions', MOCK_WF_EXECUTIONS) @mock.patch.object(rest_utils, 'get_all') def test_get_all_executions_without_output(self, mock_get_all): resp = self.app.get('/v2/executions') self.assertEqual(200, resp.status_int) args, kwargs = mock_get_all.call_args resource_function = kwargs['resource_function'] self.assertIsNone(resource_function) @mock.patch('mistral.db.v2.api.get_workflow_executions') @mock.patch('mistral.context.MistralContext.from_environ') def test_get_all_projects_admin(self, mock_context, mock_get_execs): admin_ctx = unit_base.get_context(admin=True) mock_context.return_value = admin_ctx resp = self.app.get('/v2/executions?all_projects=true') self.assertEqual(200, resp.status_int) self.assertTrue(mock_get_execs.call_args[1].get('insecure', False)) def test_get_all_projects_normal_user(self): resp = self.app.get( '/v2/executions?all_projects=true', expect_errors=True ) self.assertEqual(403, resp.status_int) 
@mock.patch('mistral.db.v2.api.get_workflow_executions') @mock.patch('mistral.context.MistralContext.from_environ') def test_get_all_filter_by_project_id(self, mock_context, mock_get_execs): admin_ctx = unit_base.get_context(admin=True) mock_context.return_value = admin_ctx fake_project_id = uuidutils.generate_uuid() resp = self.app.get('/v2/executions?project_id=%s' % fake_project_id) self.assertEqual(200, resp.status_int) self.assertTrue(mock_get_execs.call_args[1].get('insecure', False)) self.assertEqual( {'eq': fake_project_id}, mock_get_execs.call_args[1].get('project_id') ) # Copyright 2015 Huawei Technologies Co., Ltd. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import mock from oslo_config import cfg import tooz.coordination from webtest import app as webtest_app from mistral.service import coordination from mistral.tests.unit.api import base class TestServicesController(base.APITest): def test_get_all(self): cfg.CONF.set_default('backend_url', 'zake://', 'coordination') coordination.cleanup_service_coordinator() service_coordinator = coordination.get_service_coordinator( my_id='service1' ) service_coordinator.join_group('api_group') resp = self.app.get('/v2/services') self.assertEqual(200, resp.status_int) self.assertEqual(1, len(resp.json['services'])) srv_ret = [{"name": "service1", "type": "api_group"}] self.assertItemsEqual(srv_ret, resp.json['services']) def test_get_all_without_backend(self): cfg.CONF.set_default('backend_url', None, 'coordination') coordination.cleanup_service_coordinator() coordination.get_service_coordinator() context = self.assertRaises( webtest_app.AppError, self.app.get, '/v2/services', ) self.assertIn('Service API is not supported', context.args[0]) @mock.patch('mistral.service.coordination.ServiceCoordinator.get_members', side_effect=tooz.coordination.ToozError('error message')) def test_get_all_with_get_members_error(self, mock_get_members): cfg.CONF.set_default('backend_url', 'zake://', 'coordination') coordination.cleanup_service_coordinator() coordination.get_service_coordinator() context = self.assertRaises( webtest_app.AppError, self.app.get, '/v2/services', ) self.assertIn( 'Failed to get service members from coordination backend', context.args[0] ) # Copyright 2013 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy import datetime import mock import sqlalchemy as sa from mistral.db.v2 import api as db_api from mistral.db.v2.sqlalchemy import models from mistral import exceptions as exc from mistral.services import workbooks from mistral.tests.unit.api import base WORKBOOK_DEF = """ --- version: 2.0 name: 'test' """ UPDATED_WORKBOOK_DEF = """ --- version: 2.0 name: 'book' """ WORKBOOK_DB = models.Workbook( id='123', name='book', definition=WORKBOOK_DEF, tags=['deployment', 'demo'], scope="public", created_at=datetime.datetime(1970, 1, 1), updated_at=datetime.datetime(1970, 1, 1) ) WORKBOOK = { 'id': '123', 'name': 'book', 'definition': WORKBOOK_DEF, 'tags': ['deployment', 'demo'], 'scope': 'public', 'created_at': '1970-01-01 00:00:00', 'updated_at': '1970-01-01 00:00:00' } WORKBOOK_DB_PROJECT_ID = WORKBOOK_DB.get_clone() WORKBOOK_DB_PROJECT_ID.project_id = '' UPDATED_WORKBOOK_DB = copy.copy(WORKBOOK_DB) UPDATED_WORKBOOK_DB['definition'] = UPDATED_WORKBOOK_DEF UPDATED_WORKBOOK = copy.deepcopy(WORKBOOK) UPDATED_WORKBOOK['definition'] = UPDATED_WORKBOOK_DEF WB_DEF_INVALID_MODEL_EXCEPTION = """ --- version: '2.0' name: 'book' workflows: flow: type: direct tasks: task1: action: std.echo output="Hi" workflow: wf1 """ WB_DEF_DSL_PARSE_EXCEPTION = """ --- % """ WB_DEF_YAQL_PARSE_EXCEPTION = """ --- version: '2.0' name: 'book' workflows: flow: type: direct tasks: task1: action: std.echo output=<% * %> """ MOCK_WORKBOOK = mock.MagicMock(return_value=WORKBOOK_DB) MOCK_WORKBOOKS = mock.MagicMock(return_value=[WORKBOOK_DB]) MOCK_UPDATED_WORKBOOK = 
mock.MagicMock(return_value=UPDATED_WORKBOOK_DB) MOCK_DELETE = mock.MagicMock(return_value=None) MOCK_EMPTY = mock.MagicMock(return_value=[]) MOCK_NOT_FOUND = mock.MagicMock(side_effect=exc.DBEntityNotFoundError()) MOCK_DUPLICATE = mock.MagicMock(side_effect=exc.DBDuplicateEntryError()) class TestWorkbooksController(base.APITest): @mock.patch.object(db_api, "get_workbook", MOCK_WORKBOOK) def test_get(self): resp = self.app.get('/v2/workbooks/123') self.assertEqual(200, resp.status_int) self.assertDictEqual(WORKBOOK, resp.json) @mock.patch.object(db_api, 'get_workbook') def test_get_operational_error(self, mocked_get): mocked_get.side_effect = [ # Emulating DB OperationalError sa.exc.OperationalError('Mock', 'mock', 'mock'), WORKBOOK_DB # Successful run ] resp = self.app.get('/v2/workbooks/123') self.assertEqual(200, resp.status_int) self.assertDictEqual(WORKBOOK, resp.json) @mock.patch.object(db_api, "get_workbook", MOCK_NOT_FOUND) def test_get_not_found(self): resp = self.app.get('/v2/workbooks/123', expect_errors=True) self.assertEqual(404, resp.status_int) @mock.patch.object(db_api, "get_workbook", return_value=WORKBOOK_DB_PROJECT_ID) def test_get_within_project_id(self, mock_get): resp = self.app.get('/v2/workbooks/123') self.assertEqual(200, resp.status_int) self.assertTrue('project_id' in resp.json) @mock.patch.object(workbooks, "update_workbook_v2", MOCK_UPDATED_WORKBOOK) def test_put(self): resp = self.app.put( '/v2/workbooks', UPDATED_WORKBOOK_DEF, headers={'Content-Type': 'text/plain'} ) self.assertEqual(200, resp.status_int) self.assertEqual(UPDATED_WORKBOOK, resp.json) @mock.patch.object(workbooks, "update_workbook_v2", MOCK_NOT_FOUND) def test_put_not_found(self): resp = self.app.put_json( '/v2/workbooks', UPDATED_WORKBOOK_DEF, headers={'Content-Type': 'text/plain'}, expect_errors=True ) self.assertEqual(404, resp.status_int) def test_put_invalid(self): resp = self.app.put( '/v2/workbooks', WB_DEF_INVALID_MODEL_EXCEPTION, headers={'Content-Type': 
'text/plain'}, expect_errors=True ) self.assertEqual(400, resp.status_int) self.assertIn("Invalid DSL", resp.body.decode()) @mock.patch.object(workbooks, "create_workbook_v2", MOCK_WORKBOOK) def test_post(self): resp = self.app.post( '/v2/workbooks', WORKBOOK_DEF, headers={'Content-Type': 'text/plain'} ) self.assertEqual(201, resp.status_int) self.assertEqual(WORKBOOK, resp.json) @mock.patch.object(workbooks, "create_workbook_v2", MOCK_DUPLICATE) def test_post_dup(self): resp = self.app.post( '/v2/workbooks', WORKBOOK_DEF, headers={'Content-Type': 'text/plain'}, expect_errors=True ) self.assertEqual(409, resp.status_int) def test_post_invalid(self): resp = self.app.post( '/v2/workbooks', WB_DEF_INVALID_MODEL_EXCEPTION, headers={'Content-Type': 'text/plain'}, expect_errors=True ) self.assertEqual(400, resp.status_int) self.assertIn("Invalid DSL", resp.body.decode()) @mock.patch.object(db_api, "delete_workbook", MOCK_DELETE) def test_delete(self): resp = self.app.delete('/v2/workbooks/123') self.assertEqual(204, resp.status_int) @mock.patch.object(db_api, "delete_workbook", MOCK_NOT_FOUND) def test_delete_not_found(self): resp = self.app.delete('/v2/workbooks/123', expect_errors=True) self.assertEqual(404, resp.status_int) @mock.patch.object(db_api, "get_workbooks", MOCK_WORKBOOKS) def test_get_all(self): resp = self.app.get('/v2/workbooks') self.assertEqual(200, resp.status_int) self.assertEqual(1, len(resp.json['workbooks'])) self.assertDictEqual(WORKBOOK, resp.json['workbooks'][0]) @mock.patch.object(db_api, 'get_workbooks') def test_get_all_operational_error(self, mocked_get_all): mocked_get_all.side_effect = [ # Emulating DB OperationalError sa.exc.OperationalError('Mock', 'mock', 'mock'), [WORKBOOK_DB] # Successful run ] resp = self.app.get('/v2/workbooks') self.assertEqual(200, resp.status_int) self.assertEqual(1, len(resp.json['workbooks'])) self.assertDictEqual(WORKBOOK, resp.json['workbooks'][0]) @mock.patch.object(db_api, "get_workbooks", MOCK_EMPTY) def 
test_get_all_empty(self): resp = self.app.get('/v2/workbooks') self.assertEqual(200, resp.status_int) self.assertEqual(0, len(resp.json['workbooks'])) def test_validate(self): resp = self.app.post( '/v2/workbooks/validate', WORKBOOK_DEF, headers={'Content-Type': 'text/plain'} ) self.assertEqual(200, resp.status_int) self.assertTrue(resp.json['valid']) def test_validate_invalid_model_exception(self): resp = self.app.post( '/v2/workbooks/validate', WB_DEF_INVALID_MODEL_EXCEPTION, headers={'Content-Type': 'text/plain'}, expect_errors=True ) self.assertEqual(200, resp.status_int) self.assertFalse(resp.json['valid']) self.assertIn("Invalid DSL", resp.json['error']) def test_validate_dsl_parse_exception(self): resp = self.app.post( '/v2/workbooks/validate', WB_DEF_DSL_PARSE_EXCEPTION, headers={'Content-Type': 'text/plain'}, expect_errors=True ) self.assertEqual(200, resp.status_int) self.assertFalse(resp.json['valid']) self.assertIn("Definition could not be parsed", resp.json['error']) def test_validate_yaql_parse_exception(self): resp = self.app.post( '/v2/workbooks/validate', WB_DEF_YAQL_PARSE_EXCEPTION, headers={'Content-Type': 'text/plain'}, expect_errors=True ) self.assertEqual(200, resp.status_int) self.assertFalse(resp.json['valid']) self.assertIn("unexpected '*' at position 1", resp.json['error']) def test_validate_empty(self): resp = self.app.post( '/v2/workbooks/validate', '', headers={'Content-Type': 'text/plain'}, expect_errors=True ) self.assertEqual(200, resp.status_int) self.assertFalse(resp.json['valid']) self.assertIn("Invalid DSL", resp.json['error']) # Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy import json import mock import sqlalchemy as sa from mistral.db.v2 import api as db_api from mistral.db.v2.sqlalchemy import models from mistral import exceptions as exc from mistral.services import security from mistral.tests.unit.api import base from mistral.tests.unit import base as unit_base WF = models.WorkflowDefinition( spec={ 'version': '2.0', 'name': 'my_wf', 'tasks': { 'task1': { 'action': 'std.noop' } } } ) WF.update({'id': '123e4567-e89b-12d3-a456-426655440000', 'name': 'my_wf'}) TRIGGER = { 'id': '02abb422-55ef-4bb2-8cb9-217a583a6a3f', 'name': 'my_cron_trigger', 'pattern': '* * * * *', 'workflow_name': WF.name, 'workflow_id': '123e4567-e89b-12d3-a456-426655440000', 'workflow_input': '{}', 'workflow_params': '{}', 'scope': 'private', 'remaining_executions': 42 } trigger_values = copy.deepcopy(TRIGGER) trigger_values['workflow_input'] = json.loads( trigger_values['workflow_input']) trigger_values['workflow_params'] = json.loads( trigger_values['workflow_params']) TRIGGER_DB = models.CronTrigger() TRIGGER_DB.update(trigger_values) TRIGGER_DB_WITH_PROJECT_ID = TRIGGER_DB.get_clone() TRIGGER_DB_WITH_PROJECT_ID.project_id = '' MOCK_WF = mock.MagicMock(return_value=WF) MOCK_TRIGGER = mock.MagicMock(return_value=TRIGGER_DB) MOCK_TRIGGERS = mock.MagicMock(return_value=[TRIGGER_DB]) MOCK_DELETE = mock.MagicMock(return_value=1) MOCK_EMPTY = mock.MagicMock(return_value=[]) MOCK_NOT_FOUND = mock.MagicMock(side_effect=exc.DBEntityNotFoundError()) MOCK_DUPLICATE = mock.MagicMock(side_effect=exc.DBDuplicateEntryError()) class 
TestCronTriggerController(base.APITest): @mock.patch.object(db_api, "get_cron_trigger", MOCK_TRIGGER) def test_get(self): resp = self.app.get('/v2/cron_triggers/my_cron_trigger') self.assertEqual(200, resp.status_int) self.assertDictEqual(TRIGGER, resp.json) @mock.patch.object(db_api, 'get_cron_trigger') def test_get_operational_error(self, mocked_get): mocked_get.side_effect = [ # Emulating DB OperationalError sa.exc.OperationalError('Mock', 'mock', 'mock'), TRIGGER_DB # Successful run ] resp = self.app.get('/v2/cron_triggers/my_cron_trigger') self.assertEqual(200, resp.status_int) self.assertDictEqual(TRIGGER, resp.json) @mock.patch.object(db_api, "get_cron_trigger", return_value=TRIGGER_DB_WITH_PROJECT_ID) def test_get_within_project_id(self, mock_get): resp = self.app.get('/v2/cron_triggers/my_cron_trigger') self.assertEqual(200, resp.status_int) self.assertTrue('project_id' in resp.json) @mock.patch.object(db_api, "get_cron_trigger", MOCK_NOT_FOUND) def test_get_not_found(self): resp = self.app.get( '/v2/cron_triggers/my_cron_trigger', expect_errors=True ) self.assertEqual(404, resp.status_int) @mock.patch.object(db_api, "get_cron_trigger", MOCK_TRIGGER) def test_get_by_id(self): resp = self.app.get( "/v2/cron_triggers/02abb422-55ef-4bb2-8cb9-217a583a6a3f") self.assertEqual(200, resp.status_int) self.assertDictEqual(TRIGGER, resp.json) @mock.patch.object(db_api, "get_workflow_definition", MOCK_WF) @mock.patch.object(db_api, "create_cron_trigger") def test_post(self, mock_mtd): mock_mtd.return_value = TRIGGER_DB resp = self.app.post_json('/v2/cron_triggers', TRIGGER) self.assertEqual(201, resp.status_int) self.assertDictEqual(TRIGGER, resp.json) self.assertEqual(1, mock_mtd.call_count) values = mock_mtd.call_args[0][0] self.assertEqual('* * * * *', values['pattern']) self.assertEqual(42, values['remaining_executions']) @mock.patch.object(db_api, "get_workflow_definition", MOCK_WF) @mock.patch.object(db_api, "create_cron_trigger", MOCK_DUPLICATE) 
@mock.patch.object(security, "delete_trust") def test_post_dup(self, delete_trust): resp = self.app.post_json( '/v2/cron_triggers', TRIGGER, expect_errors=True ) self.assertEqual(1, delete_trust.call_count) self.assertEqual(409, resp.status_int) @mock.patch.object(db_api, "get_workflow_definition", MOCK_WF) @mock.patch.object(db_api, "create_cron_trigger", MOCK_DUPLICATE) def test_post_same_wf_and_input(self): trig = TRIGGER.copy() trig['name'] = 'some_trigger_name' resp = self.app.post_json( '/v2/cron_triggers', trig, expect_errors=True ) self.assertEqual(409, resp.status_int) @mock.patch.object(db_api, "get_cron_trigger", MOCK_TRIGGER) @mock.patch.object(db_api, "delete_cron_trigger", MOCK_DELETE) @mock.patch.object(security, "delete_trust") def test_delete(self, delete_trust): resp = self.app.delete('/v2/cron_triggers/my_cron_trigger') self.assertEqual(1, delete_trust.call_count) self.assertEqual(204, resp.status_int) @mock.patch.object(db_api, "get_cron_trigger", MOCK_TRIGGER) @mock.patch.object(db_api, "delete_cron_trigger", MOCK_DELETE) @mock.patch.object(security, "delete_trust") def test_delete_by_id(self, delete_trust): resp = self.app.delete( '/v2/cron_triggers/02abb422-55ef-4bb2-8cb9-217a583a6a3f') self.assertEqual(1, delete_trust.call_count) self.assertEqual(204, resp.status_int) @mock.patch.object(db_api, "delete_cron_trigger", MOCK_NOT_FOUND) def test_delete_not_found(self): resp = self.app.delete( '/v2/cron_triggers/my_cron_trigger', expect_errors=True ) self.assertEqual(404, resp.status_int) @mock.patch.object(db_api, "get_cron_triggers", MOCK_TRIGGERS) def test_get_all(self): resp = self.app.get('/v2/cron_triggers') self.assertEqual(200, resp.status_int) self.assertEqual(1, len(resp.json['cron_triggers'])) self.assertDictEqual(TRIGGER, resp.json['cron_triggers'][0]) @mock.patch.object(db_api, 'get_cron_triggers') def test_get_all_operational_error(self, mocked_get_all): mocked_get_all.side_effect = [ # Emulating DB OperationalError 
sa.exc.OperationalError('Mock', 'mock', 'mock'), [TRIGGER_DB] # Successful run ] resp = self.app.get('/v2/cron_triggers') self.assertEqual(200, resp.status_int) self.assertEqual(1, len(resp.json['cron_triggers'])) self.assertDictEqual(TRIGGER, resp.json['cron_triggers'][0]) @mock.patch.object(db_api, 'get_cron_triggers') @mock.patch('mistral.context.MistralContext.from_environ') def test_get_all_projects_admin(self, mock_context, mock_get_triggers): admin_ctx = unit_base.get_context(admin=True) mock_context.return_value = admin_ctx resp = self.app.get('/v2/cron_triggers?all_projects=true') self.assertEqual(200, resp.status_int) self.assertTrue(mock_get_triggers.call_args[1].get('insecure', False)) @mock.patch.object(db_api, 'get_cron_triggers') @mock.patch('mistral.context.MistralContext.from_environ') def test_get_all_filter_project(self, mock_context, mock_get_triggers): admin_ctx = unit_base.get_context(admin=True) mock_context.return_value = admin_ctx resp = self.app.get( '/v2/cron_triggers?all_projects=true&' 'project_id=192796e61c174f718d6147b129f3f2ff' ) self.assertEqual(200, resp.status_int) self.assertTrue(mock_get_triggers.call_args[1].get('insecure', False)) self.assertEqual( {'eq': '192796e61c174f718d6147b129f3f2ff'}, mock_get_triggers.call_args[1].get('project_id') ) @mock.patch.object(db_api, "get_cron_triggers", MOCK_EMPTY) def test_get_all_empty(self): resp = self.app.get('/v2/cron_triggers') self.assertEqual(200, resp.status_int) self.assertEqual(0, len(resp.json['cron_triggers'])) # Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy import mock import sqlalchemy as sa from mistral.db.v2 import api as db_api from mistral.db.v2.sqlalchemy import models from mistral import exceptions as exc from mistral.tests.unit.api import base from mistral import utils ACTION_DEFINITION = """ --- version: '2.0' my_action: description: My super cool action. tags: ['test', 'v2'] base: std.echo base-input: output: "{$.str1}{$.str2}" """ ACTION_DEFINITION_INVALID_NO_BASE = """ --- version: '2.0' my_action: description: My super cool action. tags: ['test', 'v2'] base-input: output: "{$.str1}{$.str2}" """ ACTION_DEFINITION_INVALID_YAQL = """ --- version: '2.0' my_action: description: My super cool action. tags: ['test', 'v2'] base: std.echo base-input: output: <% $. %> """ ACTION_DSL_PARSE_EXCEPTION = """ --- % """ SYSTEM_ACTION_DEFINITION = """ --- version: '2.0' std.echo: base: std.http base-input: url: "some.url" """ ACTION = { 'id': '123e4567-e89b-12d3-a456-426655440000', 'name': 'my_action', 'is_system': False, 'description': 'My super cool action.', 'tags': ['test', 'v2'], 'definition': ACTION_DEFINITION } SYSTEM_ACTION = { 'id': '1234', 'name': 'std.echo', 'is_system': True, 'definition': SYSTEM_ACTION_DEFINITION } ACTION_DB = models.ActionDefinition() ACTION_DB.update(ACTION) SYSTEM_ACTION_DB = models.ActionDefinition() SYSTEM_ACTION_DB.update(SYSTEM_ACTION) PROJECT_ID_ACTION_DB = ACTION_DB.get_clone() PROJECT_ID_ACTION_DB.project_id = '' UPDATED_ACTION_DEFINITION = """ --- version: '2.0' my_action: description: My super cool action. 
base: std.echo base-input: output: "{$.str1}{$.str2}{$.str3}" """ UPDATED_ACTION_DB = copy.copy(ACTION_DB) UPDATED_ACTION_DB['definition'] = UPDATED_ACTION_DEFINITION UPDATED_ACTION = copy.deepcopy(ACTION) UPDATED_ACTION['definition'] = UPDATED_ACTION_DEFINITION MOCK_ACTION = mock.MagicMock(return_value=ACTION_DB) MOCK_SYSTEM_ACTION = mock.MagicMock(return_value=SYSTEM_ACTION_DB) MOCK_ACTIONS = mock.MagicMock(return_value=[ACTION_DB]) MOCK_UPDATED_ACTION = mock.MagicMock(return_value=UPDATED_ACTION_DB) MOCK_DELETE = mock.MagicMock(return_value=None) MOCK_EMPTY = mock.MagicMock(return_value=[]) MOCK_NOT_FOUND = mock.MagicMock(side_effect=exc.DBEntityNotFoundError()) MOCK_DUPLICATE = mock.MagicMock(side_effect=exc.DBDuplicateEntryError()) class TestActionsController(base.APITest): @mock.patch.object( db_api, "get_action_definition", MOCK_ACTION) def test_get(self): resp = self.app.get('/v2/actions/my_action') self.assertEqual(200, resp.status_int) self.assertDictEqual(ACTION, resp.json) @mock.patch.object(db_api, 'get_action_definition') def test_get_operational_error(self, mocked_get): mocked_get.side_effect = [ # Emulating DB OperationalError sa.exc.OperationalError('Mock', 'mock', 'mock'), ACTION_DB # Successful run ] resp = self.app.get('/v2/actions/my_action') self.assertEqual(200, resp.status_int) self.assertDictEqual(ACTION, resp.json) @mock.patch.object( db_api, "get_action_definition", MOCK_NOT_FOUND) def test_get_not_found(self): resp = self.app.get('/v2/actions/my_action', expect_errors=True) self.assertEqual(404, resp.status_int) @mock.patch.object(db_api, "update_action_definition", MOCK_UPDATED_ACTION) @mock.patch.object( db_api, "get_action_definition", MOCK_ACTION) def test_get_by_id(self): url = '/v2/actions/{0}'.format(ACTION['id']) resp = self.app.get(url) self.assertEqual(200, resp.status_int) self.assertEqual(ACTION['id'], resp.json['id']) @mock.patch.object( db_api, "get_action_definition", MOCK_NOT_FOUND) def test_get_by_id_not_found(self): url 
= '/v2/actions/1234' resp = self.app.get(url, expect_errors=True) self.assertEqual(404, resp.status_int) @mock.patch.object( db_api, "get_action_definition", return_value=PROJECT_ID_ACTION_DB) def test_get_within_project_id(self, mock_get): url = '/v2/actions/1234' resp = self.app.get(url, expect_errors=True) self.assertEqual(200, resp.status_int) self.assertTrue('project_id' in resp.json) @mock.patch.object( db_api, "get_action_definition", MOCK_ACTION) @mock.patch.object( db_api, "update_action_definition", MOCK_UPDATED_ACTION ) def test_put(self): resp = self.app.put( '/v2/actions', UPDATED_ACTION_DEFINITION, headers={'Content-Type': 'text/plain'} ) self.assertEqual(200, resp.status_int) self.assertEqual({"actions": [UPDATED_ACTION]}, resp.json) @mock.patch.object(db_api, "load_action_definition", MOCK_ACTION) @mock.patch.object(db_api, "update_action_definition") def test_put_public(self, mock_mtd): mock_mtd.return_value = UPDATED_ACTION_DB resp = self.app.put( '/v2/actions?scope=public', UPDATED_ACTION_DEFINITION, headers={'Content-Type': 'text/plain'} ) self.assertEqual(200, resp.status_int) self.assertEqual({"actions": [UPDATED_ACTION]}, resp.json) self.assertEqual("public", mock_mtd.call_args[0][1]['scope']) @mock.patch.object(db_api, "update_action_definition", MOCK_NOT_FOUND) def test_put_not_found(self): resp = self.app.put( '/v2/actions', UPDATED_ACTION_DEFINITION, headers={'Content-Type': 'text/plain'}, expect_errors=True ) self.assertEqual(404, resp.status_int) @mock.patch.object( db_api, "get_action_definition", MOCK_SYSTEM_ACTION) def test_put_system(self): resp = self.app.put( '/v2/actions', SYSTEM_ACTION_DEFINITION, headers={'Content-Type': 'text/plain'}, expect_errors=True ) self.assertEqual(400, resp.status_int) self.assertIn( 'Attempt to modify a system action: std.echo', resp.body.decode() ) @mock.patch.object(db_api, "create_action_definition") def test_post(self, mock_mtd): mock_mtd.return_value = ACTION_DB resp = self.app.post( 
'/v2/actions', ACTION_DEFINITION, headers={'Content-Type': 'text/plain'} ) self.assertEqual(201, resp.status_int) self.assertEqual({"actions": [ACTION]}, resp.json) self.assertEqual(1, mock_mtd.call_count) values = mock_mtd.call_args[0][0] self.assertEqual('My super cool action.', values['description']) spec = values['spec'] self.assertIsNotNone(spec) self.assertEqual(ACTION_DB.name, spec['name']) @mock.patch.object(db_api, "create_action_definition") def test_post_public(self, mock_mtd): mock_mtd.return_value = ACTION_DB resp = self.app.post( '/v2/actions?scope=public', ACTION_DEFINITION, headers={'Content-Type': 'text/plain'} ) self.assertEqual(201, resp.status_int) self.assertEqual({"actions": [ACTION]}, resp.json) self.assertEqual("public", mock_mtd.call_args[0][0]['scope']) @mock.patch.object(db_api, "create_action_definition") def test_post_wrong_scope(self, mock_mtd): mock_mtd.return_value = ACTION_DB resp = self.app.post( '/v2/actions?scope=unique', ACTION_DEFINITION, headers={'Content-Type': 'text/plain'}, expect_errors=True ) self.assertEqual(400, resp.status_int) self.assertIn("Scope must be one of the following", resp.body.decode()) @mock.patch.object(db_api, "create_action_definition", MOCK_DUPLICATE) def test_post_dup(self): resp = self.app.post( '/v2/actions', ACTION_DEFINITION, headers={'Content-Type': 'text/plain'}, expect_errors=True ) self.assertEqual(409, resp.status_int) @mock.patch.object( db_api, "get_action_definition", MOCK_ACTION) @mock.patch.object(db_api, "delete_action_definition", MOCK_DELETE) def test_delete(self): resp = self.app.delete('/v2/actions/my_action') self.assertEqual(204, resp.status_int) @mock.patch.object(db_api, "delete_action_definition", MOCK_NOT_FOUND) def test_delete_not_found(self): resp = self.app.delete('/v2/actions/my_action', expect_errors=True) self.assertEqual(404, resp.status_int) @mock.patch.object( db_api, "get_action_definition", MOCK_SYSTEM_ACTION) def test_delete_system(self): resp = 
self.app.delete('/v2/actions/std.echo', expect_errors=True) self.assertEqual(400, resp.status_int) self.assertIn('Attempt to delete a system action: std.echo', resp.json['faultstring']) @mock.patch.object( db_api, "get_action_definitions", MOCK_ACTIONS) def test_get_all(self): resp = self.app.get('/v2/actions') self.assertEqual(200, resp.status_int) self.assertEqual(1, len(resp.json['actions'])) self.assertDictEqual(ACTION, resp.json['actions'][0]) @mock.patch.object(db_api, 'get_action_definitions') def test_get_all_operational_error(self, mocked_get_all): mocked_get_all.side_effect = [ # Emulating DB OperationalError sa.exc.OperationalError('Mock', 'mock', 'mock'), [ACTION_DB] # Successful run ] resp = self.app.get('/v2/actions') self.assertEqual(200, resp.status_int) self.assertEqual(1, len(resp.json['actions'])) self.assertDictEqual(ACTION, resp.json['actions'][0]) @mock.patch.object( db_api, "get_action_definitions", MOCK_EMPTY) def test_get_all_empty(self): resp = self.app.get('/v2/actions') self.assertEqual(200, resp.status_int) self.assertEqual(0, len(resp.json['actions'])) @mock.patch.object( db_api, "get_action_definitions", MOCK_ACTIONS) def test_get_all_pagination(self): resp = self.app.get( '/v2/actions?limit=1&sort_keys=id,name') self.assertEqual(200, resp.status_int) self.assertIn('next', resp.json) self.assertEqual(1, len(resp.json['actions'])) self.assertDictEqual(ACTION, resp.json['actions'][0]) param_dict = utils.get_dict_from_string( resp.json['next'].split('?')[1], delimiter='&' ) expected_dict = { 'marker': '123e4567-e89b-12d3-a456-426655440000', 'limit': 1, 'sort_keys': 'id,name', 'sort_dirs': 'asc,asc' } self.assertTrue( set(expected_dict.items()).issubset(set(param_dict.items())) ) def test_get_all_pagination_limit_negative(self): resp = self.app.get( '/v2/actions?limit=-1&sort_keys=id,name&sort_dirs=asc,asc', expect_errors=True ) self.assertEqual(400, resp.status_int) self.assertIn("Limit must be positive", resp.body.decode()) def 
test_get_all_pagination_limit_not_integer(self): resp = self.app.get( '/v2/actions?limit=1.1&sort_keys=id,name&sort_dirs=asc,asc', expect_errors=True ) self.assertEqual(400, resp.status_int) self.assertIn("unable to convert to int", resp.body.decode()) def test_get_all_pagination_invalid_sort_dirs_length(self): resp = self.app.get( '/v2/actions?limit=1&sort_keys=id,name&sort_dirs=asc,asc,asc', expect_errors=True ) self.assertEqual(400, resp.status_int) self.assertIn( "Length of sort_keys must be equal or greater than sort_dirs", resp.body.decode() ) def test_get_all_pagination_unknown_direction(self): resp = self.app.get( '/v2/actions?limit=1&sort_keys=id&sort_dirs=nonexist', expect_errors=True ) self.assertEqual(400, resp.status_int) self.assertIn("Unknown sort direction", resp.body.decode()) def test_validate(self): resp = self.app.post( '/v2/actions/validate', ACTION_DEFINITION, headers={'Content-Type': 'text/plain'} ) self.assertEqual(200, resp.status_int) self.assertTrue(resp.json['valid']) def test_validate_invalid_model_exception(self): resp = self.app.post( '/v2/actions/validate', ACTION_DEFINITION_INVALID_NO_BASE, headers={'Content-Type': 'text/plain'}, expect_errors=True ) self.assertEqual(200, resp.status_int) self.assertFalse(resp.json['valid']) self.assertIn("Invalid DSL", resp.json['error']) def test_validate_dsl_parse_exception(self): resp = self.app.post( '/v2/actions/validate', ACTION_DSL_PARSE_EXCEPTION, headers={'Content-Type': 'text/plain'}, expect_errors=True ) self.assertEqual(200, resp.status_int) self.assertFalse(resp.json['valid']) self.assertIn("Definition could not be parsed", resp.json['error']) def test_validate_yaql_parse_exception(self): resp = self.app.post( '/v2/actions/validate', ACTION_DEFINITION_INVALID_YAQL, headers={'Content-Type': 'text/plain'}, expect_errors=True ) self.assertEqual(200, resp.status_int) self.assertFalse(resp.json['valid']) self.assertIn("unexpected end of statement", resp.json['error']) def 
test_validate_empty(self): resp = self.app.post( '/v2/actions/validate', '', headers={'Content-Type': 'text/plain'}, expect_errors=True ) self.assertEqual(200, resp.status_int) self.assertFalse(resp.json['valid']) self.assertIn("Invalid DSL", resp.json['error']) # Copyright 2016 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import mock from mistral.db.v2 import api as db_api from mistral.db.v2.sqlalchemy import models from mistral.tests.unit.api import base from mistral.tests.unit.mstrlfixtures import policy_fixtures WF_DEFINITION = """ --- version: '2.0' flow: type: direct input: - param1 tasks: task1: action: std.echo output="Hi" """ WF_DB = models.WorkflowDefinition( id='123e4567-e89b-12d3-a456-426655440000', name='flow', definition=WF_DEFINITION, created_at=datetime.datetime(1970, 1, 1), updated_at=datetime.datetime(1970, 1, 1), spec={'input': ['param1']} ) WF = { 'id': '123e4567-e89b-12d3-a456-426655440000', 'name': 'flow', 'definition': WF_DEFINITION, 'created_at': '1970-01-01 00:00:00', 'updated_at': '1970-01-01 00:00:00', 'input': 'param1' } MOCK_WF = mock.MagicMock(return_value=WF_DB) class TestPolicies(base.APITest): @mock.patch.object(db_api, "get_workflow_definition", MOCK_WF) def get(self): resp = self.app.get('/v2/workflows/123', expect_errors=True) return resp.status_int def 
test_disable_workflow_api(self): self.policy = self.useFixture(policy_fixtures.PolicyFixture()) rules = {"workflows:get": "role:FAKE"} self.policy.change_policy_definition(rules) response_value = self.get() self.assertEqual(403, response_value) def test_enable_workflow_api(self): self.policy = self.useFixture(policy_fixtures.PolicyFixture()) rules = {"workflows:get": "role:FAKE or rule:admin_or_owner"} self.policy.change_policy_definition(rules) response_value = self.get() self.assertEqual(200, response_value) mistral-6.0.0/mistral/tests/unit/api/test_resource_base.py0000666000175100017510000000424613245513261024025 0ustar zuulzuul00000000000000# Copyright 2016 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import copy
import datetime

from mistral.api.controllers.v2 import resources
from mistral.db.v2 import api as db_api
from mistral.tests.unit import base
from mistral import utils

WF_EXEC = {
    'id': 'c0f3be41-88b9-4c86-a669-83e77cd0a1b8',
    'spec': {},
    'params': {'task': 'my_task1'},
    'project_id': '',
    'scope': 'PUBLIC',
    'state': 'IDLE',
    'state_info': "Running...",
    'created_at': datetime.datetime(2016, 12, 1, 15, 0, 0),
    'updated_at': None,
    'context': None,
    'task_execution_id': None,
    'description': None,
    'output': None,
    'accepted': False,
    'some_invalid_field': "foobar"
}


class TestRestResource(base.DbTestCase):
    def test_from_db_model(self):
        wf_ex = db_api.create_workflow_execution(WF_EXEC)

        self.assertIsNotNone(wf_ex)

        wf_ex_resource = resources.Execution.from_db_model(wf_ex)

        self.assertIsNotNone(wf_ex_resource)

        expected = copy.copy(WF_EXEC)

        del expected['some_invalid_field']

        utils.datetime_to_str_in_dict(expected, 'created_at')

        self.assertDictEqual(expected, wf_ex.to_dict())

    def test_from_dict(self):
        wf_ex = db_api.create_workflow_execution(WF_EXEC)

        self.assertIsNotNone(wf_ex)

        wf_ex_resource = resources.Execution.from_dict(wf_ex.to_dict())

        self.assertIsNotNone(wf_ex_resource)

        expected = copy.copy(WF_EXEC)

        del expected['some_invalid_field']

        utils.datetime_to_str_in_dict(expected, 'created_at')

        self.assertDictEqual(expected, wf_ex.to_dict())


# ---- mistral-6.0.0/mistral/tests/unit/api/base.py ----

# Copyright 2013 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import mock
import pecan
import pecan.testing
from webtest import app as webtest_app

from mistral.api import app as pecan_app
from mistral.services import periodic
from mistral.tests.unit import base
from mistral.tests.unit.mstrlfixtures import policy_fixtures


class APITest(base.DbTestCase):
    def setUp(self):
        super(APITest, self).setUp()

        self.override_config('auth_enable', False, group='pecan')
        self.override_config('enabled', False, group='cron_trigger')

        self.app = pecan.testing.load_test_app(
            dict(pecan_app.get_pecan_config())
        )

        # Adding cron trigger thread clean up explicitly in case if
        # new tests will provide an alternative configuration for pecan
        # application.
        self.addCleanup(periodic.stop_all_periodic_tasks)

        # Make sure the api get the correct context.
        self.patch_ctx = mock.patch(
            'mistral.context.MistralContext.from_environ'
        )
        self.mock_ctx = self.patch_ctx.start()
        self.mock_ctx.return_value = self.ctx

        self.addCleanup(self.patch_ctx.stop)

        self.policy = self.useFixture(policy_fixtures.PolicyFixture())

    def assertNotFound(self, url):
        try:
            self.app.get(url, headers={'Accept': 'application/json'})
        except webtest_app.AppError as error:
            self.assertIn('Bad response: 404 Not Found', str(error))

            return

        self.fail('Expected 404 Not found but got OK')

    def assertUnauthorized(self, url):
        try:
            self.app.get(url, headers={'Accept': 'application/json'})
        except webtest_app.AppError as error:
            self.assertIn('Bad response: 401 Unauthorized', str(error))

            return

        self.fail('Expected 401 Unauthorized but got OK')


# ---- mistral-6.0.0/mistral/tests/unit/api/__init__.py (empty) ----

# ---- mistral-6.0.0/mistral/tests/unit/api/test_cors_middleware.py ----

# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Tests cors middleware."""

from mistral.tests.unit.api import base
from oslo_config import cfg
from oslo_middleware import cors as cors_middleware


class TestCORSMiddleware(base.APITest):
    """Provide a basic smoke test to ensure CORS middleware is active.

    The tests below provide minimal confirmation that the CORS middleware
    is active, and may be configured. For comprehensive tests, please
    consult the test suite in oslo_middleware.
    """

    def setUp(self):
        # Make sure the CORS options are registered
        cfg.CONF.register_opts(cors_middleware.CORS_OPTS, 'cors')

        # Load up our valid domain values before the application is created.
        self.override_config(
            "allowed_origin",
            "http://valid.example.com",
            group='cors'
        )

        # Create the application.
        super(TestCORSMiddleware, self).setUp()

    def test_valid_cors_options_request(self):
        response = self.app.options(
            '/',
            headers={
                'Origin': 'http://valid.example.com',
                'Access-Control-Request-Method': 'GET'
            }
        )

        self.assertEqual(200, response.status_code)
        self.assertIn('access-control-allow-origin', response.headers)
        self.assertEqual(
            'http://valid.example.com',
            response.headers['access-control-allow-origin']
        )

    def test_invalid_cors_options_request(self):
        response = self.app.options(
            '/',
            headers={
                'Origin': 'http://invalid.example.com',
                'Access-Control-Request-Method': 'GET'
            }
        )

        self.assertEqual(200, response.status_code)
        self.assertNotIn('access-control-allow-origin', response.headers)

    def test_valid_cors_get_request(self):
        response = self.app.get(
            '/',
            headers={
                'Origin': 'http://valid.example.com'
            }
        )

        self.assertEqual(200, response.status_code)
        self.assertIn('access-control-allow-origin', response.headers)
        self.assertEqual(
            'http://valid.example.com',
            response.headers['access-control-allow-origin']
        )

    def test_invalid_cors_get_request(self):
        response = self.app.get(
            '/',
            headers={
                'Origin': 'http://invalid.example.com'
            }
        )

        self.assertEqual(200, response.status_code)
        self.assertNotIn('access-control-allow-origin', response.headers)


# ---- mistral-6.0.0/mistral/tests/unit/api/test_service.py ----

# Copyright 2016 NEC Corporation. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock from oslo_concurrency import processutils from oslo_config import cfg from mistral.api import service from mistral.tests.unit import base class TestWSGIService(base.BaseTest): def setUp(self): super(TestWSGIService, self).setUp() self.override_config('enabled', False, group='cron_trigger') @mock.patch.object(service.wsgi, 'Server') def test_workers_set_default(self, wsgi_server): service_name = "mistral_api" with mock.patch('mistral.api.app.setup_app'): test_service = service.WSGIService(service_name) wsgi_server.assert_called_once_with( cfg.CONF, service_name, test_service.app, host='0.0.0.0', port=8989, use_ssl=False ) def test_workers_set_correct_setting(self): self.override_config('api_workers', 8, group='api') with mock.patch('mistral.api.app.setup_app'): test_service = service.WSGIService("mistral_api") self.assertEqual(8, test_service.workers) def test_workers_set_zero_setting(self): self.override_config('api_workers', 0, group='api') with mock.patch('mistral.api.app.setup_app'): test_service = service.WSGIService("mistral_api") self.assertEqual( processutils.get_worker_count(), test_service.workers ) @mock.patch.object(service.wsgi, 'Server') def test_wsgi_service_with_ssl_enabled(self, wsgi_server): self.override_config('enable_ssl_api', True, group='api') service_name = 'mistral_api' with mock.patch('mistral.api.app.setup_app'): srv = service.WSGIService(service_name) wsgi_server.assert_called_once_with( cfg.CONF, service_name, srv.app, host='0.0.0.0', port=8989, use_ssl=True ) mistral-6.0.0/mistral/tests/unit/api/test_auth.py0000666000175100017510000000427213245513261022144 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import datetime from oslo_utils import timeutils from oslo_utils import uuidutils import pecan import pecan.testing from mistral.api import app as pecan_app from mistral.tests.unit.api import base WORKBOOKS = [ { u'name': u'my_workbook', u'description': u'My cool Mistral workbook', u'scope': None, u'tags': [u'deployment', u'demo'] } ] PKI_TOKEN_VERIFIED = { 'token': { 'methods': ['password'], 'roles': [{'id': uuidutils.generate_uuid(dashed=False), 'name': 'admin'}], 'expires_at': datetime.datetime.isoformat( datetime.datetime.utcnow() + datetime.timedelta(seconds=60) ), 'project': { 'domain': {'id': 'default', 'name': 'Default'}, 'id': uuidutils.generate_uuid(dashed=False), 'name': 'Mistral' }, 'catalog': [], 'extras': {}, 'user': { 'domain': {'id': 'default', 'name': 'Default'}, 'id': uuidutils.generate_uuid(dashed=False), 'name': 'admin' }, 'issued_at': datetime.datetime.isoformat(timeutils.utcnow()) } } class TestKeystoneMiddleware(base.APITest): """Test keystone middleware AuthProtocol. It checks that keystone middleware AuthProtocol is executed when enabled. 
""" def setUp(self): super(TestKeystoneMiddleware, self).setUp() self.override_config('auth_enable', True, group='pecan') self.override_config('enabled', False, group='cron_trigger') self.app = pecan.testing.load_test_app( dict(pecan_app.get_pecan_config()) ) mistral-6.0.0/mistral/tests/unit/db/0000775000175100017510000000000013245513604017400 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/unit/db/v2/0000775000175100017510000000000013245513604017727 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/unit/db/v2/test_sqlite_transactions.py0000666000175100017510000000513213245513261025433 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import eventlet from eventlet import semaphore from oslo_config import cfg import testtools from mistral.db.v2.sqlalchemy import api as db_api from mistral.tests.unit import base as test_base WF_EXEC = { 'name': '1', 'spec': {}, 'start_params': {}, 'state': 'RUNNING', 'state_info': "Running...", 'created_at': None, 'updated_at': None, 'context': None, 'task_id': None, 'trust_id': None } @testtools.skipIf( 'sqlite' not in cfg.CONF.database.connection, 'SQLite is not used for the database backend.') class SQLiteTransactionsTest(test_base.DbTestCase): """The purpose of this test is to research transactions of SQLite.""" def setUp(self): super(SQLiteTransactionsTest, self).setUp() cfg.CONF.set_default('auth_enable', True, group='pecan') self.addCleanup( cfg.CONF.set_default, 'auth_enable', False, group='pecan' ) def test_dirty_reads(self): sem1 = semaphore.Semaphore(0) sem2 = semaphore.Semaphore(0) def _run_tx1(): with db_api.transaction(): wf_ex = db_api.create_workflow_execution(WF_EXEC) # Release TX2 so it can read data. sem2.release() print("Created: %s" % wf_ex) print("Holding TX1...") sem1.acquire() print("TX1 completed.") def _run_tx2(): with db_api.transaction(): print("Holding TX2...") sem2.acquire() wf_execs = db_api.get_workflow_executions() print("Read: %s" % wf_execs) self.assertEqual(1, len(wf_execs)) # Release TX1 so it can complete. sem1.release() print("TX2 completed.") t1 = eventlet.spawn(_run_tx1) t2 = eventlet.spawn(_run_tx2) t1.wait() t2.wait() t1.kill() t2.kill() mistral-6.0.0/mistral/tests/unit/db/v2/test_sqlalchemy_db_api.py0000666000175100017510000027743013245513261025016 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # TODO(rakhmerov): Add checks for timestamps. import copy import datetime from oslo_config import cfg from mistral import context as auth_context from mistral.db.v2.sqlalchemy import api as db_api from mistral.db.v2.sqlalchemy import models as db_models from mistral import exceptions as exc from mistral.services import security from mistral.tests.unit import base as test_base from mistral.utils import filter_utils DEFAULT_CTX = test_base.get_context() USER_CTX = test_base.get_context(default=False) ADM_CTX = test_base.get_context(default=False, admin=True) WORKBOOKS = [ { 'name': 'my_workbook1', 'definition': 'empty', 'spec': {}, 'tags': ['mc'], 'scope': 'public', 'updated_at': None, 'project_id': '1233', 'trust_id': '1234', 'created_at': datetime.datetime(2016, 12, 1, 15, 0, 0) }, { 'name': 'my_workbook2', 'description': 'my description', 'definition': 'empty', 'spec': {}, 'tags': ['mc'], 'scope': 'private', 'updated_at': None, 'project_id': '1233', 'trust_id': '12345', 'created_at': datetime.datetime(2016, 12, 1, 15, 1, 0) }, ] class SQLAlchemyTest(test_base.DbTestCase): def setUp(self): super(SQLAlchemyTest, self).setUp() cfg.CONF.set_default('auth_enable', True, group='pecan') self.addCleanup(cfg.CONF.set_default, 'auth_enable', False, group='pecan') class WorkbookTest(SQLAlchemyTest): def test_create_and_get_and_load_workbook(self): created = db_api.create_workbook(WORKBOOKS[0]) fetched = db_api.get_workbook(created['name']) self.assertEqual(created, fetched) fetched = db_api.load_workbook(created.name) self.assertEqual(created, fetched) 
self.assertIsNone(db_api.load_workbook("not-existing-wb")) def test_create_workbook_duplicate_without_auth(self): cfg.CONF.set_default('auth_enable', False, group='pecan') db_api.create_workbook(WORKBOOKS[0]) self.assertRaises( exc.DBDuplicateEntryError, db_api.create_workbook, WORKBOOKS[0] ) def test_update_workbook(self): created = db_api.create_workbook(WORKBOOKS[0]) self.assertIsNone(created.updated_at) updated = db_api.update_workbook( created.name, {'definition': 'my new definition'} ) self.assertEqual('my new definition', updated.definition) fetched = db_api.get_workbook(created['name']) self.assertEqual(updated, fetched) self.assertIsNotNone(fetched.updated_at) def test_create_or_update_workbook(self): name = WORKBOOKS[0]['name'] self.assertIsNone(db_api.load_workbook(name)) created = db_api.create_or_update_workbook( name, WORKBOOKS[0] ) self.assertIsNotNone(created) self.assertIsNotNone(created.name) updated = db_api.create_or_update_workbook( created.name, {'definition': 'my new definition'} ) self.assertEqual('my new definition', updated.definition) self.assertEqual( 'my new definition', db_api.load_workbook(updated.name).definition ) fetched = db_api.get_workbook(created.name) self.assertEqual(updated, fetched) def test_get_workbooks(self): created0 = db_api.create_workbook(WORKBOOKS[0]) created1 = db_api.create_workbook(WORKBOOKS[1]) fetched = db_api.get_workbooks() self.assertEqual(2, len(fetched)) self._assert_single_item(fetched, name=created0['name']) self._assert_single_item(fetched, name=created1['name']) def test_filter_workbooks_by_equal_value(self): db_api.create_workbook(WORKBOOKS[0]) created = db_api.create_workbook(WORKBOOKS[1]) _filter = filter_utils.create_or_update_filter( 'name', created.name, 'eq' ) fetched = db_api.get_workbooks(**_filter) self.assertEqual(1, len(fetched)) self.assertEqual(created, fetched[0]) def test_filter_workbooks_by_not_equal_value(self): created0 = db_api.create_workbook(WORKBOOKS[0]) created1 = 
db_api.create_workbook(WORKBOOKS[1]) _filter = filter_utils.create_or_update_filter( 'name', created0.name, 'neq' ) fetched = db_api.get_workbooks(**_filter) self.assertEqual(1, len(fetched)) self.assertEqual(created1, fetched[0]) def test_filter_workbooks_by_greater_than_value(self): created0 = db_api.create_workbook(WORKBOOKS[0]) created1 = db_api.create_workbook(WORKBOOKS[1]) _filter = filter_utils.create_or_update_filter( 'created_at', created0['created_at'], 'gt' ) fetched = db_api.get_workbooks(**_filter) self.assertEqual(1, len(fetched)) self.assertEqual(created1, fetched[0]) def test_filter_workbooks_by_greater_than_equal_value(self): created0 = db_api.create_workbook(WORKBOOKS[0]) created1 = db_api.create_workbook(WORKBOOKS[1]) _filter = filter_utils.create_or_update_filter( 'created_at', created0['created_at'], 'gte' ) fetched = db_api.get_workbooks(**_filter) self.assertEqual(2, len(fetched)) self._assert_single_item(fetched, name=created0['name']) self._assert_single_item(fetched, name=created1['name']) def test_filter_workbooks_by_less_than_value(self): created0 = db_api.create_workbook(WORKBOOKS[0]) created1 = db_api.create_workbook(WORKBOOKS[1]) _filter = filter_utils.create_or_update_filter( 'created_at', created1['created_at'], 'lt' ) fetched = db_api.get_workbooks(**_filter) self.assertEqual(1, len(fetched)) self.assertEqual(created0, fetched[0]) def test_filter_workbooks_by_less_than_equal_value(self): created0 = db_api.create_workbook(WORKBOOKS[0]) created1 = db_api.create_workbook(WORKBOOKS[1]) _filter = filter_utils.create_or_update_filter( 'created_at', created1['created_at'], 'lte' ) fetched = db_api.get_workbooks(**_filter) self.assertEqual(2, len(fetched)) self._assert_single_item(fetched, name=created0['name']) self._assert_single_item(fetched, name=created1['name']) def test_filter_workbooks_by_values_in_list(self): created0 = db_api.create_workbook(WORKBOOKS[0]) db_api.create_workbook(WORKBOOKS[1]) _filter = 
filter_utils.create_or_update_filter( 'created_at', [created0['created_at']], 'in' ) fetched = db_api.get_workbooks(**_filter) self.assertEqual(1, len(fetched)) self.assertEqual(created0, fetched[0]) def test_filter_workbooks_by_values_notin_list(self): created0 = db_api.create_workbook(WORKBOOKS[0]) created1 = db_api.create_workbook(WORKBOOKS[1]) _filter = filter_utils.create_or_update_filter( 'created_at', [created0['created_at']], 'nin' ) fetched = db_api.get_workbooks(**_filter) self.assertEqual(1, len(fetched)) self.assertEqual(created1, fetched[0]) def test_filter_workbooks_by_multiple_columns(self): created0 = db_api.create_workbook(WORKBOOKS[0]) created1 = db_api.create_workbook(WORKBOOKS[1]) _filter = filter_utils.create_or_update_filter( 'created_at', [created0['created_at'], created1['created_at']], 'in' ) _filter = filter_utils.create_or_update_filter( 'name', 'my_workbook2', 'eq', _filter ) fetched = db_api.get_workbooks(**_filter) self.assertEqual(1, len(fetched)) self.assertEqual(created1, fetched[0]) def test_delete_workbook(self): created = db_api.create_workbook(WORKBOOKS[0]) fetched = db_api.get_workbook(created.name) self.assertEqual(created, fetched) db_api.delete_workbook(created.name) self.assertRaises( exc.DBEntityNotFoundError, db_api.get_workbook, created.name ) def test_workbooks_in_two_projects(self): created = db_api.create_workbook(WORKBOOKS[1]) fetched = db_api.get_workbooks() self.assertEqual(1, len(fetched)) self.assertEqual(created, fetched[0]) # Create a new user. auth_context.set_ctx(USER_CTX) created = db_api.create_workbook(WORKBOOKS[1]) fetched = db_api.get_workbooks() self.assertEqual(1, len(fetched)) self.assertEqual(created, fetched[0]) def test_workbook_private(self): # Create a workbook(scope=private) as under one project # then make sure it's NOT visible for other projects. 
created1 = db_api.create_workbook(WORKBOOKS[1]) fetched = db_api.get_workbooks() self.assertEqual(1, len(fetched)) self.assertEqual(created1, fetched[0]) # Create a new user. auth_context.set_ctx(USER_CTX) fetched = db_api.get_workbooks() self.assertEqual(0, len(fetched)) def test_workbook_public(self): # Create a workbook(scope=public) as under one project # then make sure it's visible for other projects. created0 = db_api.create_workbook(WORKBOOKS[0]) fetched = db_api.get_workbooks() self.assertEqual(1, len(fetched)) self.assertEqual(created0, fetched[0]) # Assert that the project_id stored is actually the context's # project_id not the one given. self.assertEqual(created0.project_id, auth_context.ctx().project_id) self.assertNotEqual(WORKBOOKS[0]['project_id'], auth_context.ctx().project_id) # Create a new user. auth_context.set_ctx(USER_CTX) fetched = db_api.get_workbooks() self.assertEqual(1, len(fetched)) self.assertEqual(created0, fetched[0]) self.assertEqual('public', created0.scope) def test_workbook_repr(self): s = db_api.create_workbook(WORKBOOKS[0]).__repr__() self.assertIn('Workbook ', s) self.assertIn("'name': 'my_workbook1'", s) WF_DEFINITIONS = [ { 'name': 'my_wf1', 'definition': 'empty', 'spec': {}, 'tags': ['mc'], 'scope': 'public', 'project_id': '1233', 'trust_id': '1234', 'created_at': datetime.datetime(2016, 12, 1, 15, 0, 0), 'namespace': '' }, { 'name': 'my_wf2', 'definition': 'empty', 'spec': {}, 'tags': ['mc'], 'scope': 'private', 'project_id': '1233', 'trust_id': '12345', 'created_at': datetime.datetime(2016, 12, 1, 15, 1, 0), 'namespace': '' }, ] CRON_TRIGGER = { 'name': 'trigger1', 'pattern': '* * * * *', 'workflow_name': 'my_wf1', 'workflow_id': None, 'workflow_input': {}, 'next_execution_time': datetime.datetime.now() + datetime.timedelta(days=1), 'remaining_executions': 42, 'scope': 'private', 'project_id': '' } class WorkflowDefinitionTest(SQLAlchemyTest): def test_create_and_get_and_load_workflow_definition(self): created = 
db_api.create_workflow_definition(WF_DEFINITIONS[0]) fetched = db_api.get_workflow_definition(created.name) self.assertEqual(created, fetched) fetched = db_api.load_workflow_definition(created.name) self.assertEqual(created, fetched) self.assertIsNone(db_api.load_workflow_definition("not-existing-wf")) def test_get_workflow_definition_with_uuid(self): created = db_api.create_workflow_definition(WF_DEFINITIONS[0]) fetched = db_api.get_workflow_definition(created.id) self.assertEqual(created, fetched) def test_get_workflow_definition_by_admin(self): created = db_api.create_workflow_definition(WF_DEFINITIONS[0]) # Switch to admin project. auth_context.set_ctx(test_base.get_context(default=False, admin=True)) fetched = db_api.get_workflow_definition(created.id) self.assertEqual(created, fetched) def test_filter_workflow_definitions_by_equal_value(self): db_api.create_workbook(WF_DEFINITIONS[0]) created = db_api.create_workflow_definition(WF_DEFINITIONS[1]) _filter = filter_utils.create_or_update_filter( 'name', created.name, 'eq' ) fetched = db_api.get_workflow_definitions(**_filter) self.assertEqual(1, len(fetched)) self.assertEqual(created, fetched[0]) def test_filter_workflow_definition_by_not_equal_valiue(self): created0 = db_api.create_workflow_definition(WF_DEFINITIONS[0]) created1 = db_api.create_workflow_definition(WF_DEFINITIONS[1]) _filter = filter_utils.create_or_update_filter( 'name', created0.name, 'neq' ) fetched = db_api.get_workflow_definitions(**_filter) self.assertEqual(1, len(fetched)) self.assertEqual(created1, fetched[0]) def test_filter_workflow_definition_by_greater_than_value(self): created0 = db_api.create_workflow_definition(WF_DEFINITIONS[0]) created1 = db_api.create_workflow_definition(WF_DEFINITIONS[1]) _filter = filter_utils.create_or_update_filter( 'created_at', created0['created_at'], 'gt' ) fetched = db_api.get_workflow_definitions(**_filter) self.assertEqual(1, len(fetched)) self.assertEqual(created1, fetched[0]) def 
test_filter_workflow_definition_by_greater_than_equal_value(self): created0 = db_api.create_workflow_definition(WF_DEFINITIONS[0]) created1 = db_api.create_workflow_definition(WF_DEFINITIONS[1]) _filter = filter_utils.create_or_update_filter( 'created_at', created0['created_at'], 'gte' ) fetched = db_api.get_workflow_definitions(**_filter) self.assertEqual(2, len(fetched)) self._assert_single_item(fetched, name=created0['name']) self._assert_single_item(fetched, name=created1['name']) def test_filter_workflow_definition_by_less_than_value(self): created0 = db_api.create_workflow_definition(WF_DEFINITIONS[0]) created1 = db_api.create_workflow_definition(WF_DEFINITIONS[1]) _filter = filter_utils.create_or_update_filter( 'created_at', created1['created_at'], 'lt' ) fetched = db_api.get_workflow_definitions(**_filter) self.assertEqual(1, len(fetched)) self.assertEqual(created0, fetched[0]) def test_filter_workflow_definition_by_less_than_equal_value(self): created0 = db_api.create_workflow_definition(WF_DEFINITIONS[0]) created1 = db_api.create_workflow_definition(WF_DEFINITIONS[1]) _filter = filter_utils.create_or_update_filter( 'created_at', created1['created_at'], 'lte' ) fetched = db_api.get_workflow_definitions(**_filter) self.assertEqual(2, len(fetched)) self._assert_single_item(fetched, name=created0['name']) self._assert_single_item(fetched, name=created1['name']) def test_filter_workflow_definition_by_values_in_list(self): created0 = db_api.create_workflow_definition(WF_DEFINITIONS[0]) db_api.create_workflow_definition(WF_DEFINITIONS[1]) _filter = filter_utils.create_or_update_filter( 'created_at', [created0['created_at']], 'in' ) fetched = db_api.get_workflow_definitions(**_filter) self.assertEqual(1, len(fetched)) self.assertEqual(created0, fetched[0]) def test_filter_workflow_definition_by_values_notin_list(self): created0 = db_api.create_workflow_definition(WF_DEFINITIONS[0]) created1 = db_api.create_workflow_definition(WF_DEFINITIONS[1]) _filter = 
filter_utils.create_or_update_filter( 'created_at', [created0['created_at']], 'nin' ) fetched = db_api.get_workflow_definitions(**_filter) self.assertEqual(1, len(fetched)) self.assertEqual(created1, fetched[0]) def test_filter_workflow_definition_by_multiple_columns(self): created0 = db_api.create_workflow_definition(WF_DEFINITIONS[0]) created1 = db_api.create_workflow_definition(WF_DEFINITIONS[1]) _filter = filter_utils.create_or_update_filter( 'created_at', [created0['created_at'], created1['created_at']], 'in' ) _filter = filter_utils.create_or_update_filter( 'name', 'my_wf2', 'eq', _filter ) fetched = db_api.get_workflow_definitions(**_filter) self.assertEqual(1, len(fetched)) self.assertEqual(created1, fetched[0]) def test_create_workflow_definition_duplicate_without_auth(self): cfg.CONF.set_default('auth_enable', False, group='pecan') db_api.create_workflow_definition(WF_DEFINITIONS[0]) self.assertRaises( exc.DBDuplicateEntryError, db_api.create_workflow_definition, WF_DEFINITIONS[0] ) def test_update_workflow_definition(self): created = db_api.create_workflow_definition(WF_DEFINITIONS[0]) self.assertIsNone(created.updated_at) # Update workflow using workflow name as identifier. updated = db_api.update_workflow_definition( created['name'], {'definition': 'my new definition', 'scope': 'private'} ) self.assertEqual('my new definition', updated.definition) fetched = db_api.get_workflow_definition(created.name) self.assertEqual(updated, fetched) self.assertIsNotNone(fetched.updated_at) # Update workflow using workflow uuid as identifier. 
updated = db_api.update_workflow_definition( created['id'], { 'name': 'updated_name', 'definition': 'my new definition', 'scope': 'private' } ) self.assertEqual('updated_name', updated.name) self.assertEqual('my new definition', updated.definition) fetched = db_api.get_workflow_definition(created['id']) self.assertEqual(updated, fetched) self.assertIsNotNone(fetched.updated_at) def test_update_other_project_workflow_definition(self): created = db_api.create_workflow_definition(WF_DEFINITIONS[0]) # Switch to another project. auth_context.set_ctx(USER_CTX) self.assertRaises( exc.NotAllowedException, db_api.update_workflow_definition, created.name, {'definition': 'my new definition', 'scope': 'private'} ) def test_update_other_project_workflow_by_admin(self): created = db_api.create_workflow_definition(WF_DEFINITIONS[1]) # Switch to admin. auth_context.set_ctx(ADM_CTX) updated = db_api.update_workflow_definition( created['id'], { 'definition': 'my new definition', 'scope': 'public', } ) self.assertEqual('my new definition', updated.definition) # Switch back. auth_context.set_ctx(DEFAULT_CTX) fetched = db_api.get_workflow_definition(created['id']) self.assertEqual(updated, fetched) def test_update_system_workflow_by_admin(self): system_workflow = copy.deepcopy(WF_DEFINITIONS[0]) system_workflow['is_system'] = True created = db_api.create_workflow_definition(system_workflow) # Switch to admin. 
auth_context.set_ctx(ADM_CTX) updated = db_api.update_workflow_definition( created['id'], { 'definition': 'my new definition', 'scope': 'public' } ) self.assertEqual('my new definition', updated.definition) def test_create_or_update_workflow_definition(self): name = WF_DEFINITIONS[0]['name'] self.assertIsNone(db_api.load_workflow_definition(name)) created = db_api.create_or_update_workflow_definition( name, WF_DEFINITIONS[0] ) self.assertIsNotNone(created) self.assertIsNotNone(created.name) updated = db_api.create_or_update_workflow_definition( created.name, {'definition': 'my new definition', 'scope': 'private'} ) self.assertEqual('my new definition', updated.definition) self.assertEqual( 'my new definition', db_api.load_workflow_definition(updated.name).definition ) fetched = db_api.get_workflow_definition(created.name) self.assertEqual(updated, fetched) def test_update_wf_scope_cron_trigger_associated_in_diff_tenant(self): created = db_api.create_workflow_definition(WF_DEFINITIONS[0]) # Create a new user. auth_context.set_ctx(USER_CTX) cron_trigger = copy.copy(CRON_TRIGGER) cron_trigger['workflow_id'] = created.id db_api.create_cron_trigger(cron_trigger) auth_context.set_ctx(DEFAULT_CTX) self.assertRaises( exc.NotAllowedException, db_api.update_workflow_definition, created['name'], {'scope': 'private'} ) def test_update_wf_scope_event_trigger_associated_in_diff_tenant(self): created = db_api.create_workflow_definition(WF_DEFINITIONS[0]) # Switch to another user. auth_context.set_ctx(USER_CTX) event_trigger = copy.copy(EVENT_TRIGGERS[0]) event_trigger.update({'workflow_id': created.id}) db_api.create_event_trigger(event_trigger) # Switch back. 
        auth_context.set_ctx(DEFAULT_CTX)

        self.assertRaises(
            exc.NotAllowedException,
            db_api.update_workflow_definition,
            created.id,
            {'scope': 'private'}
        )

    def test_update_wf_scope_event_trigger_associated_in_same_tenant(self):
        created = db_api.create_workflow_definition(WF_DEFINITIONS[0])

        event_trigger = copy.copy(EVENT_TRIGGERS[0])
        event_trigger.update({'workflow_id': created.id})

        db_api.create_event_trigger(event_trigger)

        updated = db_api.update_workflow_definition(
            created.id,
            {'scope': 'private'}
        )

        self.assertEqual('private', updated.scope)

    def test_update_wf_scope_cron_trigger_associated_in_same_tenant(self):
        created = db_api.create_workflow_definition(WF_DEFINITIONS[0])

        cron_trigger = copy.copy(CRON_TRIGGER)
        cron_trigger.update({'workflow_id': created.id})

        db_api.create_cron_trigger(cron_trigger)

        updated = db_api.update_workflow_definition(
            created['name'],
            {'scope': 'private'}
        )

        self.assertEqual('private', updated.scope)

    def test_get_workflow_definitions(self):
        created0 = db_api.create_workflow_definition(WF_DEFINITIONS[0])
        created1 = db_api.create_workflow_definition(WF_DEFINITIONS[1])

        fetched0 = db_api.load_workflow_definition(created0.name)
        fetched1 = db_api.load_workflow_definition(created1.name)

        self.assertEqual(security.get_project_id(), fetched0.project_id)
        self.assertEqual(security.get_project_id(), fetched1.project_id)

        fetched = db_api.get_workflow_definitions()

        self.assertEqual(2, len(fetched))
        self._assert_single_item(fetched, name=created0['name'])
        self._assert_single_item(fetched, name=created1['name'])

    def test_delete_workflow_definition(self):
        created0 = db_api.create_workflow_definition(WF_DEFINITIONS[0])
        created1 = db_api.create_workflow_definition(WF_DEFINITIONS[1])

        fetched0 = db_api.get_workflow_definition(created0.name)
        fetched1 = db_api.get_workflow_definition(created1.id)

        self.assertEqual(created0, fetched0)
        self.assertEqual(created1, fetched1)

        for identifier in [created0.name, created1.id]:
            db_api.delete_workflow_definition(identifier)
            self.assertRaises(
                exc.DBEntityNotFoundError,
                db_api.get_workflow_definition,
                identifier
            )

    def test_delete_workflow_definition_has_event_trigger(self):
        created = db_api.create_workflow_definition(WF_DEFINITIONS[1])

        event_trigger = copy.copy(EVENT_TRIGGERS[0])
        event_trigger['workflow_id'] = created.id

        trigger = db_api.create_event_trigger(event_trigger)

        self.assertEqual(trigger.workflow_id, created.id)

        self.assertRaises(
            exc.DBError,
            db_api.delete_workflow_definition,
            created.id
        )

    def test_delete_other_project_workflow_definition(self):
        created = db_api.create_workflow_definition(WF_DEFINITIONS[0])

        # Switch to another project.
        auth_context.set_ctx(USER_CTX)

        self.assertRaises(
            exc.NotAllowedException,
            db_api.delete_workflow_definition,
            created.name
        )

    def test_delete_other_project_workflow_definition_by_admin(self):
        created = db_api.create_workflow_definition(WF_DEFINITIONS[0])

        # Switch to admin.
        auth_context.set_ctx(ADM_CTX)

        db_api.delete_workflow_definition(created['id'])

        # Switch back.
        auth_context.set_ctx(DEFAULT_CTX)

        self.assertRaises(
            exc.DBEntityNotFoundError,
            db_api.get_workflow_definition,
            created['id']
        )

    def test_workflow_definition_private(self):
        # Create a workflow (scope=private) under one project,
        # then make sure it's NOT visible to other projects.
        created1 = db_api.create_workflow_definition(WF_DEFINITIONS[1])

        fetched = db_api.get_workflow_definitions()

        self.assertEqual(1, len(fetched))
        self.assertEqual(created1, fetched[0])

        # Create a new user.
        auth_context.set_ctx(USER_CTX)

        fetched = db_api.get_workflow_definitions()

        self.assertEqual(0, len(fetched))

    def test_workflow_definition_public(self):
        # Create a workflow (scope=public) under one project,
        # then make sure it's visible to other projects.
        created0 = db_api.create_workflow_definition(WF_DEFINITIONS[0])

        fetched = db_api.get_workflow_definitions()

        self.assertEqual(1, len(fetched))
        self.assertEqual(created0, fetched[0])

        # Assert that the project_id stored is actually the context's
        # project_id not the one given.
        self.assertEqual(created0.project_id, auth_context.ctx().project_id)
        self.assertNotEqual(
            WF_DEFINITIONS[0]['project_id'],
            auth_context.ctx().project_id
        )

        # Create a new user.
        auth_context.set_ctx(USER_CTX)

        fetched = db_api.get_workflow_definitions()

        self.assertEqual(1, len(fetched))
        self.assertEqual(created0, fetched[0])
        self.assertEqual('public', fetched[0].scope)

    def test_workflow_definition_repr(self):
        s = db_api.create_workflow_definition(WF_DEFINITIONS[0]).__repr__()

        self.assertIn('WorkflowDefinition ', s)
        self.assertIn("'name': 'my_wf1'", s)


ACTION_DEFINITIONS = [
    {
        'name': 'action1',
        'description': 'Action #1',
        'is_system': True,
        'action_class': 'mypackage.my_module.Action1',
        'attributes': None,
        'project_id': '',
        'created_at': datetime.datetime(2016, 12, 1, 15, 0, 0)
    },
    {
        'name': 'action2',
        'description': 'Action #2',
        'is_system': True,
        'action_class': 'mypackage.my_module.Action2',
        'attributes': None,
        'project_id': '',
        'created_at': datetime.datetime(2016, 12, 1, 15, 1, 0)
    },
    {
        'name': 'action3',
        'description': 'Action #3',
        'is_system': False,
        'tags': ['mc', 'abc'],
        'action_class': 'mypackage.my_module.Action3',
        'attributes': None,
        'project_id': '',
        'created_at': datetime.datetime(2016, 12, 1, 15, 2, 0)
    },
]


class ActionDefinitionTest(SQLAlchemyTest):
    def setUp(self):
        super(ActionDefinitionTest, self).setUp()

        db_api.delete_action_definitions()

    def test_create_and_get_and_load_action_definition(self):
        created = db_api.create_action_definition(ACTION_DEFINITIONS[0])

        fetched = db_api.get_action_definition(created.name)

        self.assertEqual(created, fetched)

        fetched = db_api.load_action_definition(created.name)

        self.assertEqual(created, fetched)
        self.assertIsNone(db_api.load_action_definition("not-existing-id"))

    def test_get_action_definition_with_uuid(self):
        created = db_api.create_action_definition(ACTION_DEFINITIONS[0])

        fetched = db_api.get_action_definition(created.id)

        self.assertEqual(created, fetched)

    def test_create_action_definition_duplicate_without_auth(self):
        cfg.CONF.set_default('auth_enable', False, group='pecan')

        db_api.create_action_definition(ACTION_DEFINITIONS[0])

        self.assertRaises(
            exc.DBDuplicateEntryError,
            db_api.create_action_definition,
            ACTION_DEFINITIONS[0]
        )

    def test_filter_action_definitions_by_equal_value(self):
        db_api.create_action_definition(ACTION_DEFINITIONS[0])
        db_api.create_action_definition(ACTION_DEFINITIONS[1])

        created2 = db_api.create_action_definition(ACTION_DEFINITIONS[2])

        _filter = filter_utils.create_or_update_filter(
            'is_system',
            False,
            'eq'
        )

        fetched = db_api.get_action_definitions(**_filter)

        self.assertEqual(1, len(fetched))
        self.assertEqual(created2, fetched[0])

    def test_filter_action_definitions_by_not_equal_value(self):
        created0 = db_api.create_action_definition(ACTION_DEFINITIONS[0])
        created1 = db_api.create_action_definition(ACTION_DEFINITIONS[1])

        db_api.create_action_definition(ACTION_DEFINITIONS[2])

        _filter = filter_utils.create_or_update_filter(
            'is_system',
            False,
            'neq'
        )

        fetched = db_api.get_action_definitions(**_filter)

        self.assertEqual(2, len(fetched))
        self._assert_single_item(fetched, name=created0['name'])
        self._assert_single_item(fetched, name=created1['name'])

    def test_filter_action_definitions_by_greater_than_value(self):
        created0 = db_api.create_action_definition(ACTION_DEFINITIONS[0])
        created1 = db_api.create_action_definition(ACTION_DEFINITIONS[1])
        created2 = db_api.create_action_definition(ACTION_DEFINITIONS[2])

        _filter = filter_utils.create_or_update_filter(
            'created_at',
            created0['created_at'],
            'gt'
        )

        fetched = db_api.get_action_definitions(**_filter)

        self.assertEqual(2, len(fetched))
        self.assertIn(created1, fetched)
        self.assertIn(created2, fetched)

    def test_filter_action_definitions_by_greater_than_equal_value(self):
        created0 = db_api.create_action_definition(ACTION_DEFINITIONS[0])
        created1 = db_api.create_action_definition(ACTION_DEFINITIONS[1])
        created2 = db_api.create_action_definition(ACTION_DEFINITIONS[2])

        _filter = filter_utils.create_or_update_filter(
            'created_at',
            created0['created_at'],
            'gte'
        )

        fetched = db_api.get_action_definitions(**_filter)

        self.assertEqual(3, len(fetched))
        self._assert_single_item(fetched, name=created0['name'])
        self._assert_single_item(fetched, name=created1['name'])
        self._assert_single_item(fetched, name=created2['name'])

    def test_filter_action_definitions_by_less_than_value(self):
        created0 = db_api.create_action_definition(ACTION_DEFINITIONS[0])
        created1 = db_api.create_action_definition(ACTION_DEFINITIONS[1])
        created2 = db_api.create_action_definition(ACTION_DEFINITIONS[2])

        _filter = filter_utils.create_or_update_filter(
            'created_at',
            created2['created_at'],
            'lt'
        )

        fetched = db_api.get_action_definitions(**_filter)

        self.assertEqual(2, len(fetched))
        self._assert_single_item(fetched, name=created0['name'])
        self._assert_single_item(fetched, name=created1['name'])

    def test_filter_action_definitions_by_less_than_equal_value(self):
        created0 = db_api.create_action_definition(ACTION_DEFINITIONS[0])
        created1 = db_api.create_action_definition(ACTION_DEFINITIONS[1])
        created2 = db_api.create_action_definition(ACTION_DEFINITIONS[2])

        _filter = filter_utils.create_or_update_filter(
            'created_at',
            created2['created_at'],
            'lte'
        )

        fetched = db_api.get_action_definitions(**_filter)

        self.assertEqual(3, len(fetched))
        self._assert_single_item(fetched, name=created0['name'])
        self._assert_single_item(fetched, name=created1['name'])
        self._assert_single_item(fetched, name=created2['name'])

    def test_filter_action_definitions_by_values_in_list(self):
        created0 = db_api.create_action_definition(ACTION_DEFINITIONS[0])
        created1 = db_api.create_action_definition(ACTION_DEFINITIONS[1])
        db_api.create_action_definition(ACTION_DEFINITIONS[2])

        _filter = filter_utils.create_or_update_filter(
            'created_at',
            [created0['created_at'], created1['created_at']],
            'in'
        )

        fetched = db_api.get_action_definitions(**_filter)

        self.assertEqual(2, len(fetched))
        self._assert_single_item(fetched, name=created0['name'])
        self._assert_single_item(fetched, name=created1['name'])

    def test_filter_action_definitions_by_values_notin_list(self):
        created0 = db_api.create_action_definition(ACTION_DEFINITIONS[0])
        created1 = db_api.create_action_definition(ACTION_DEFINITIONS[1])
        created2 = db_api.create_action_definition(ACTION_DEFINITIONS[2])

        _filter = filter_utils.create_or_update_filter(
            'created_at',
            [created0['created_at'], created1['created_at']],
            'nin'
        )

        fetched = db_api.get_action_definitions(**_filter)

        self.assertEqual(1, len(fetched))
        self._assert_single_item(fetched, name=created2['name'])

    def test_filter_action_definitions_by_multiple_columns(self):
        created0 = db_api.create_action_definition(ACTION_DEFINITIONS[0])
        created1 = db_api.create_action_definition(ACTION_DEFINITIONS[1])

        db_api.create_action_definition(ACTION_DEFINITIONS[2])

        _filter = filter_utils.create_or_update_filter(
            'created_at',
            [created0['created_at'], created1['created_at']],
            'in'
        )
        _filter = filter_utils.create_or_update_filter(
            'is_system',
            True,
            'neq',
            _filter
        )

        fetched = db_api.get_action_definitions(**_filter)

        self.assertEqual(0, len(fetched))

    def test_filter_action_definitions_by_has_filter(self):
        db_api.create_action_definition(ACTION_DEFINITIONS[0])
        db_api.create_action_definition(ACTION_DEFINITIONS[1])

        created3 = db_api.create_action_definition(ACTION_DEFINITIONS[2])

        f = filter_utils.create_or_update_filter('name', "3", 'has')

        fetched = db_api.get_action_definitions(**f)

        self.assertEqual(1, len(fetched))
        self.assertEqual(created3, fetched[0])

        f = filter_utils.create_or_update_filter('name', "action", 'has')

        fetched = db_api.get_action_definitions(**f)

        self.assertEqual(3, len(fetched))

    def test_update_action_definition_with_name(self):
        created = db_api.create_action_definition(ACTION_DEFINITIONS[0])

        self.assertIsNone(created.updated_at)

        updated = db_api.update_action_definition(
            created.name,
            {'description': 'my new desc'}
        )

        self.assertEqual('my new desc', updated.description)

        fetched = db_api.get_action_definition(created.name)

        self.assertEqual(updated, fetched)
        self.assertIsNotNone(fetched.updated_at)

    def test_update_action_definition_with_uuid(self):
        created = db_api.create_action_definition(ACTION_DEFINITIONS[0])

        self.assertIsNone(created.updated_at)

        updated = db_api.update_action_definition(
            created.id,
            {'description': 'my new desc'}
        )

        self.assertEqual('my new desc', updated.description)

        fetched = db_api.get_action_definition(created.id)

        self.assertEqual(updated, fetched)

    def test_create_or_update_action_definition(self):
        name = 'not-existing-id'

        self.assertIsNone(db_api.load_action_definition(name))

        created = db_api.create_or_update_action_definition(
            name,
            ACTION_DEFINITIONS[0]
        )

        self.assertIsNotNone(created)
        self.assertIsNotNone(created.name)

        updated = db_api.create_or_update_action_definition(
            created.name,
            {'description': 'my new desc'}
        )

        self.assertEqual('my new desc', updated.description)
        self.assertEqual(
            'my new desc',
            db_api.load_action_definition(updated.name).description
        )

        fetched = db_api.get_action_definition(created.name)

        self.assertEqual(updated, fetched)

    def test_get_action_definitions(self):
        created0 = db_api.create_action_definition(ACTION_DEFINITIONS[0])
        created1 = db_api.create_action_definition(ACTION_DEFINITIONS[1])

        fetched = db_api.get_action_definitions(is_system=True)

        self.assertEqual(2, len(fetched))
        self._assert_single_item(fetched, name=created0['name'])
        self._assert_single_item(fetched, name=created1['name'])

    def test_delete_action_definition_with_name(self):
        created = db_api.create_action_definition(ACTION_DEFINITIONS[0])

        fetched = db_api.get_action_definition(created.name)

        self.assertEqual(created, fetched)
        db_api.delete_action_definition(created.name)

        self.assertRaises(
            exc.DBEntityNotFoundError,
            db_api.get_action_definition,
            created.name
        )

    def test_delete_action_definition_with_uuid(self):
        created = db_api.create_action_definition(ACTION_DEFINITIONS[0])

        fetched = db_api.get_action_definition(created.id)

        self.assertEqual(created, fetched)

        db_api.delete_action_definition(created.id)

        self.assertRaises(
            exc.DBEntityNotFoundError,
            db_api.get_action_definition,
            created.id
        )

    def test_action_definition_repr(self):
        s = db_api.create_action_definition(ACTION_DEFINITIONS[0]).__repr__()

        self.assertIn('ActionDefinition ', s)
        self.assertIn("'description': 'Action #1'", s)
        self.assertIn("'name': 'action1'", s)


ACTION_EXECS = [
    {
        'spec': None,
        'state': 'IDLE',
        'state_info': "Running...",
        'created_at': None,
        'updated_at': None,
        'task_id': None,
        'tags': [],
        'accepted': True,
        'output': {"result": "value"}
    },
    {
        'spec': None,
        'state': 'ERROR',
        'state_info': "Failed due to some reason...",
        'created_at': None,
        'updated_at': None,
        'task_id': None,
        'tags': ['deployment'],
        'accepted': False,
        'output': {"result": "value"}
    }
]


class ActionExecutionTest(SQLAlchemyTest):
    def test_create_and_get_and_load_action_execution(self):
        with db_api.transaction():
            created = db_api.create_action_execution(ACTION_EXECS[0])

            fetched = db_api.get_action_execution(created.id)

            self.assertEqual(created, fetched)

            fetched = db_api.load_action_execution(created.id)

            self.assertEqual(created, fetched)

            self.assertIsNone(db_api.load_action_execution("not-existing-id"))

    def test_update_action_execution(self):
        with db_api.transaction():
            created = db_api.create_action_execution(ACTION_EXECS[0])

            self.assertIsNone(created.updated_at)

            updated = db_api.update_action_execution(
                created.id,
                {'state': 'RUNNING', 'state_info': "Running..."}
            )

            self.assertEqual('RUNNING', updated.state)
            self.assertEqual(
                'RUNNING',
                db_api.load_action_execution(updated.id).state
            )

            fetched = db_api.get_action_execution(created.id)
            self.assertEqual(updated, fetched)
            self.assertIsNotNone(fetched.updated_at)

    def test_create_or_update_action_execution(self):
        id = 'not-existing-id'

        self.assertIsNone(db_api.load_action_execution(id))

        created = db_api.create_or_update_action_execution(id, ACTION_EXECS[0])

        self.assertIsNotNone(created)
        self.assertIsNotNone(created.id)

        with db_api.transaction():
            updated = db_api.create_or_update_action_execution(
                created.id,
                {'state': 'RUNNING'}
            )

            self.assertEqual('RUNNING', updated.state)
            self.assertEqual(
                'RUNNING',
                db_api.load_action_execution(updated.id).state
            )

            fetched = db_api.get_action_execution(created.id)

            self.assertEqual(updated, fetched)

    def test_get_action_executions(self):
        with db_api.transaction():
            created0 = db_api.create_action_execution(WF_EXECS[0])

            db_api.create_action_execution(ACTION_EXECS[1])

            fetched = db_api.get_action_executions(
                state=WF_EXECS[0]['state']
            )

            self.assertEqual(1, len(fetched))
            self.assertEqual(created0, fetched[0])

    def test_delete_action_execution(self):
        with db_api.transaction():
            created = db_api.create_action_execution(ACTION_EXECS[0])

            fetched = db_api.get_action_execution(created.id)

            self.assertEqual(created, fetched)

            db_api.delete_action_execution(created.id)

            self.assertRaises(
                exc.DBEntityNotFoundError,
                db_api.get_action_execution,
                created.id
            )

    def test_delete_other_tenant_action_execution(self):
        created = db_api.create_action_execution(ACTION_EXECS[0])

        # Create a new user.
        auth_context.set_ctx(USER_CTX)

        self.assertRaises(
            exc.DBEntityNotFoundError,
            db_api.delete_action_execution,
            created.id
        )

    def test_trim_status_info(self):
        created = db_api.create_action_execution(ACTION_EXECS[0])

        self.assertIsNone(created.updated_at)

        updated = db_api.update_action_execution(
            created.id,
            {'state': 'FAILED', 'state_info': ".." * 65536}
        )

        self.assertEqual('FAILED', updated.state)

        state_info = db_api.load_action_execution(updated.id).state_info

        self.assertEqual(
            65503,
            len(state_info)
        )

    def test_action_execution_repr(self):
        s = db_api.create_action_execution(ACTION_EXECS[0]).__repr__()

        self.assertIn('ActionExecution ', s)
        self.assertIn("'state': 'IDLE'", s)
        self.assertIn("'state_info': 'Running...'", s)
        self.assertIn("'accepted': True", s)


WF_EXECS = [
    {
        'spec': {},
        'start_params': {'task': 'my_task1'},
        'state': 'IDLE',
        'state_info': "Running...",
        'created_at': datetime.datetime(2016, 12, 1, 15, 0, 0),
        'updated_at': None,
        'context': None,
        'task_id': None,
        'trust_id': None,
        'description': None,
        'output': None
    },
    {
        'spec': {},
        'start_params': {'task': 'my_task1'},
        'state': 'RUNNING',
        'state_info': "Running...",
        'created_at': datetime.datetime(2016, 12, 1, 15, 1, 0),
        'updated_at': None,
        'context': {'image_id': '123123'},
        'task_id': None,
        'trust_id': None,
        'description': None,
        'output': None
    }
]


class WorkflowExecutionTest(SQLAlchemyTest):
    def test_create_and_get_and_load_workflow_execution(self):
        with db_api.transaction():
            created = db_api.create_workflow_execution(WF_EXECS[0])

            fetched = db_api.get_workflow_execution(created.id)

            self.assertEqual(created, fetched)

            fetched = db_api.load_workflow_execution(created.id)

            self.assertEqual(created, fetched)

            self.assertIsNone(
                db_api.load_workflow_execution("not-existing-id")
            )

    def test_update_workflow_execution(self):
        with db_api.transaction():
            created = db_api.create_workflow_execution(WF_EXECS[0])

            self.assertIsNone(created.updated_at)

            updated = db_api.update_workflow_execution(
                created.id,
                {'state': 'RUNNING', 'state_info': "Running..."}
            )

            self.assertEqual('RUNNING', updated.state)
            self.assertEqual(
                'RUNNING',
                db_api.load_workflow_execution(updated.id).state
            )

            fetched = db_api.get_workflow_execution(created.id)

            self.assertEqual(updated, fetched)
            self.assertIsNotNone(fetched.updated_at)

    def test_update_workflow_execution_by_admin(self):
        with db_api.transaction():
            created = db_api.create_workflow_execution(WF_EXECS[0])

            auth_context.set_ctx(ADM_CTX)

            updated = db_api.update_workflow_execution(
                created.id,
                {'state': 'RUNNING', 'state_info': "Running..."}
            )

            auth_context.set_ctx(DEFAULT_CTX)

            self.assertEqual('RUNNING', updated.state)
            self.assertEqual(
                'RUNNING',
                db_api.load_workflow_execution(updated.id).state
            )

            fetched = db_api.get_workflow_execution(created.id)

            self.assertEqual(updated, fetched)
            self.assertIsNotNone(fetched.updated_at)

    def test_update_workflow_execution_by_others_fail(self):
        with db_api.transaction():
            created = db_api.create_workflow_execution(WF_EXECS[0])

            auth_context.set_ctx(USER_CTX)

            self.assertRaises(
                exc.DBEntityNotFoundError,
                db_api.update_workflow_execution,
                created.id,
                {'state': 'RUNNING', 'state_info': "Running..."}
            )

    def test_create_or_update_workflow_execution(self):
        id = 'not-existing-id'

        self.assertIsNone(db_api.load_workflow_execution(id))

        with db_api.transaction():
            created = db_api.create_or_update_workflow_execution(
                id,
                WF_EXECS[0]
            )

            self.assertIsNotNone(created)
            self.assertIsNotNone(created.id)

            updated = db_api.create_or_update_workflow_execution(
                created.id,
                {'state': 'RUNNING'}
            )

            self.assertEqual('RUNNING', updated.state)
            self.assertEqual(
                'RUNNING',
                db_api.load_workflow_execution(updated.id).state
            )

            fetched = db_api.get_workflow_execution(created.id)

            self.assertEqual(updated, fetched)

    def test_get_workflow_executions(self):
        with db_api.transaction():
            created0 = db_api.create_workflow_execution(WF_EXECS[0])

            db_api.create_workflow_execution(WF_EXECS[1])

            fetched = db_api.get_workflow_executions(
                state=WF_EXECS[0]['state']
            )

            self.assertEqual(1, len(fetched))
            self.assertEqual(created0, fetched[0])

    def test_filter_workflow_execution_by_equal_value(self):
        with db_api.transaction():
            db_api.create_workflow_execution(WF_EXECS[0])

            created = db_api.create_workflow_execution(WF_EXECS[1])

            _filter = filter_utils.create_or_update_filter(
                'id',
                created.id,
                'eq'
            )

            fetched = db_api.get_workflow_executions(**_filter)

            self.assertEqual(1, len(fetched))
            self.assertEqual(created, fetched[0])

    def test_filter_workflow_execution_by_not_equal_value(self):
        with db_api.transaction():
            created0 = db_api.create_workflow_execution(WF_EXECS[0])
            created1 = db_api.create_workflow_execution(WF_EXECS[1])

            _filter = filter_utils.create_or_update_filter(
                'id',
                created0.id,
                'neq'
            )

            fetched = db_api.get_workflow_executions(**_filter)

            self.assertEqual(1, len(fetched))
            self.assertEqual(created1, fetched[0])

    def test_filter_workflow_execution_by_greater_than_value(self):
        with db_api.transaction():
            created0 = db_api.create_workflow_execution(WF_EXECS[0])
            created1 = db_api.create_workflow_execution(WF_EXECS[1])

            _filter = filter_utils.create_or_update_filter(
                'created_at',
                created0['created_at'],
                'gt'
            )

            fetched = db_api.get_workflow_executions(**_filter)

            self.assertEqual(1, len(fetched))
            self.assertEqual(created1, fetched[0])

    def test_filter_workflow_execution_by_greater_than_equal_value(self):
        with db_api.transaction():
            created0 = db_api.create_workflow_execution(WF_EXECS[0])
            created1 = db_api.create_workflow_execution(WF_EXECS[1])

            _filter = filter_utils.create_or_update_filter(
                'created_at',
                created0['created_at'],
                'gte'
            )

            fetched = db_api.get_workflow_executions(**_filter)

            self.assertEqual(2, len(fetched))
            self._assert_single_item(fetched, state=created0['state'])
            self._assert_single_item(fetched, state=created1['state'])

    def test_filter_workflow_execution_by_less_than_value(self):
        with db_api.transaction():
            created0 = db_api.create_workflow_execution(WF_EXECS[0])
            created1 = db_api.create_workflow_execution(WF_EXECS[1])

            _filter = filter_utils.create_or_update_filter(
                'created_at',
                created1['created_at'],
                'lt'
            )

            fetched = db_api.get_workflow_executions(**_filter)

            self.assertEqual(1, len(fetched))
            self.assertEqual(created0, fetched[0])

    def test_filter_workflow_execution_by_less_than_equal_value(self):
        with db_api.transaction():
            created0 = db_api.create_workflow_execution(WF_EXECS[0])
            created1 = db_api.create_workflow_execution(WF_EXECS[1])

            _filter = filter_utils.create_or_update_filter(
                'created_at',
                created1['created_at'],
                'lte'
            )

            fetched = db_api.get_workflow_executions(**_filter)

            self.assertEqual(2, len(fetched))
            self._assert_single_item(fetched, state=created0['state'])
            self._assert_single_item(fetched, state=created1['state'])

    def test_filter_workflow_execution_by_values_in_list(self):
        with db_api.transaction():
            created0 = db_api.create_workflow_execution(WF_EXECS[0])

            db_api.create_workflow_execution(WF_EXECS[1])

            _filter = filter_utils.create_or_update_filter(
                'created_at',
                [created0['created_at']],
                'in'
            )

            fetched = db_api.get_workflow_executions(**_filter)

            self.assertEqual(1, len(fetched))
            self.assertEqual(created0, fetched[0])

    def test_filter_workflow_execution_by_values_notin_list(self):
        with db_api.transaction():
            created0 = db_api.create_workflow_execution(WF_EXECS[0])
            created1 = db_api.create_workflow_execution(WF_EXECS[1])

            _filter = filter_utils.create_or_update_filter(
                'created_at',
                [created0['created_at']],
                'nin'
            )

            fetched = db_api.get_workflow_executions(**_filter)

            self.assertEqual(1, len(fetched))
            self.assertEqual(created1, fetched[0])

    def test_filter_workflow_execution_by_multiple_columns(self):
        with db_api.transaction():
            created0 = db_api.create_workflow_execution(WF_EXECS[0])
            created1 = db_api.create_workflow_execution(WF_EXECS[1])

            _filter = filter_utils.create_or_update_filter(
                'created_at',
                [created0['created_at'], created1['created_at']],
                'in'
            )
            _filter = filter_utils.create_or_update_filter(
                'id',
                created1.id,
                'eq',
                _filter
            )

            fetched = db_api.get_workflow_executions(**_filter)

            self.assertEqual(1, len(fetched))
            self.assertEqual(created1, fetched[0])

    def test_delete_workflow_execution(self):
        with db_api.transaction():
            created = db_api.create_workflow_execution(WF_EXECS[0])

            fetched = db_api.get_workflow_execution(created.id)

            self.assertEqual(created, fetched)
            db_api.delete_workflow_execution(created.id)

            self.assertRaises(
                exc.DBEntityNotFoundError,
                db_api.get_workflow_execution,
                created.id
            )

    def test_delete_workflow_execution_by_admin(self):
        with db_api.transaction():
            created = db_api.create_workflow_execution(WF_EXECS[0])

            fetched = db_api.get_workflow_execution(created.id)

            self.assertEqual(created, fetched)

            auth_context.set_ctx(ADM_CTX)

            db_api.delete_workflow_execution(created.id)

            auth_context.set_ctx(DEFAULT_CTX)

            self.assertRaises(
                exc.DBEntityNotFoundError,
                db_api.get_workflow_execution,
                created.id
            )

    def test_delete_workflow_execution_by_other_fail(self):
        created = db_api.create_workflow_execution(WF_EXECS[0])

        auth_context.set_ctx(USER_CTX)

        self.assertRaises(
            exc.DBEntityNotFoundError,
            db_api.delete_workflow_execution,
            created.id
        )

    def test_trim_status_info(self):
        created = db_api.create_workflow_execution(WF_EXECS[0])

        self.assertIsNone(created.updated_at)

        updated = db_api.update_workflow_execution(
            created.id,
            {'state': 'FAILED', 'state_info': ".." * 65536}
        )

        self.assertEqual('FAILED', updated.state)

        state_info = db_api.load_workflow_execution(updated.id).state_info

        self.assertEqual(
            65503,
            len(state_info)
        )

    def test_task_executions(self):
        # Add an associated object into collection.
        with db_api.transaction():
            wf_ex = db_api.create_workflow_execution(WF_EXECS[0])

            self.assertEqual(0, len(wf_ex.task_executions))

            wf_ex.task_executions.append(
                db_models.TaskExecution(**TASK_EXECS[0])
            )

        # Make sure task execution has been saved.
        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            self.assertIsNotNone(wf_ex)
            self.assertEqual(1, len(wf_ex.task_executions))

            task_ex = wf_ex.task_executions[0]

            self.assertEqual(TASK_EXECS[0]['name'], task_ex.name)

        self.assertEqual(1, len(db_api.get_workflow_executions()))
        self.assertEqual(1, len(db_api.get_task_executions()))

        # Remove task execution from collection.
        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            del wf_ex.task_executions[:]

        # Make sure task execution has been removed.
        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            self.assertEqual(0, len(wf_ex.task_executions))
            self.assertIsNone(db_api.load_task_execution(task_ex.id))

    def test_workflow_execution_repr(self):
        s = db_api.create_workflow_execution(WF_EXECS[0]).__repr__()

        self.assertIn('Execution ', s)
        self.assertIn("'context': None", s)
        self.assertIn("'state': 'IDLE'", s)


TASK_EXECS = [
    {
        'workflow_execution_id': '1',
        'workflow_name': 'my_wb.my_wf',
        'name': 'my_task1',
        'spec': None,
        'action_spec': None,
        'state': 'IDLE',
        'tags': ['deployment'],
        'in_context': None,
        'runtime_context': None,
        'created_at': datetime.datetime(2016, 12, 1, 15, 0, 0),
        'updated_at': None
    },
    {
        'workflow_execution_id': '1',
        'workflow_name': 'my_wb.my_wf',
        'name': 'my_task2',
        'spec': None,
        'action_spec': None,
        'state': 'IDLE',
        'tags': ['deployment'],
        'in_context': {'image_id': '123123'},
        'runtime_context': None,
        'created_at': datetime.datetime(2016, 12, 1, 15, 1, 0),
        'updated_at': None
    },
]


class TaskExecutionTest(SQLAlchemyTest):
    def test_create_and_get_and_load_task_execution(self):
        with db_api.transaction():
            wf_ex = db_api.create_workflow_execution(WF_EXECS[0])

            values = copy.deepcopy(TASK_EXECS[0])
            values.update({'workflow_execution_id': wf_ex.id})

            created = db_api.create_task_execution(values)

            fetched = db_api.get_task_execution(created.id)

            self.assertEqual(created, fetched)

            self.assertNotIsInstance(fetched.workflow_execution, list)

            fetched = db_api.load_task_execution(created.id)

            self.assertEqual(created, fetched)

            self.assertIsNone(db_api.load_task_execution("not-existing-id"))

    def test_action_executions(self):
        # Store one task with two invocations.
        with db_api.transaction():
            wf_ex = db_api.create_workflow_execution(WF_EXECS[0])

            values = copy.deepcopy(TASK_EXECS[0])
            values.update({'workflow_execution_id': wf_ex.id})

            task = db_api.create_task_execution(values)

            self.assertEqual(0, len(task.action_executions))
            self.assertEqual(0, len(task.workflow_executions))

            a_ex1 = db_models.ActionExecution()
            a_ex2 = db_models.ActionExecution()

            task.action_executions.append(a_ex1)
            task.action_executions.append(a_ex2)

            self.assertEqual(2, len(task.action_executions))
            self.assertEqual(0, len(task.workflow_executions))

        # Make sure associated objects were saved.
        with db_api.transaction():
            task = db_api.get_task_execution(task.id)

            self.assertEqual(2, len(task.action_executions))

            self.assertNotIsInstance(
                task.action_executions[0].task_execution,
                list
            )

        # Remove associated objects from collection.
        with db_api.transaction():
            task = db_api.get_task_execution(task.id)

            del task.action_executions[:]

        # Make sure associated objects were deleted.
        with db_api.transaction():
            task = db_api.get_task_execution(task.id)

            self.assertEqual(0, len(task.action_executions))

    def test_update_task_execution(self):
        wf_ex = db_api.create_workflow_execution(WF_EXECS[0])

        values = copy.deepcopy(TASK_EXECS[0])
        values.update({'workflow_execution_id': wf_ex.id})

        created = db_api.create_task_execution(values)

        self.assertIsNone(created.updated_at)

        updated = db_api.update_task_execution(
            created.id,
            {'workflow_name': 'new_wf'}
        )

        self.assertEqual('new_wf', updated.workflow_name)

        fetched = db_api.get_task_execution(created.id)

        self.assertEqual(updated, fetched)
        self.assertIsNotNone(fetched.updated_at)

    def test_create_or_update_task_execution(self):
        id = 'not-existing-id'

        self.assertIsNone(db_api.load_task_execution(id))

        wf_ex = db_api.create_workflow_execution(WF_EXECS[0])

        values = copy.deepcopy(TASK_EXECS[0])
        values.update({'workflow_execution_id': wf_ex.id})

        created = db_api.create_or_update_task_execution(id, values)

        self.assertIsNotNone(created)
self.assertIsNotNone(created.id) updated = db_api.create_or_update_task_execution( created.id, {'state': 'RUNNING'} ) self.assertEqual('RUNNING', updated.state) self.assertEqual( 'RUNNING', db_api.load_task_execution(updated.id).state ) fetched = db_api.get_task_execution(created.id) self.assertEqual(updated, fetched) def test_get_task_executions(self): wf_ex = db_api.create_workflow_execution(WF_EXECS[0]) values = copy.deepcopy(TASK_EXECS[0]) values.update({'workflow_execution_id': wf_ex.id}) created0 = db_api.create_task_execution(values) values = copy.deepcopy(TASK_EXECS[1]) values.update({'workflow_execution_id': wf_ex.id}) created1 = db_api.create_task_execution(values) fetched = db_api.get_task_executions( workflow_name=TASK_EXECS[0]['workflow_name'] ) self.assertEqual(2, len(fetched)) self._assert_single_item(fetched, name=created0['name']) self._assert_single_item(fetched, name=created1['name']) def test_filter_task_execution_by_equal_value(self): created, _ = self._create_task_executions() _filter = filter_utils.create_or_update_filter( 'name', created.name, 'eq' ) fetched = db_api.get_task_executions(**_filter) self.assertEqual(1, len(fetched)) self.assertEqual(created, fetched[0]) def test_filter_task_execution_by_not_equal_value(self): created0, created1 = self._create_task_executions() _filter = filter_utils.create_or_update_filter( 'name', created0.name, 'neq' ) fetched = db_api.get_task_executions(**_filter) self.assertEqual(1, len(fetched)) self.assertEqual(created1, fetched[0]) def test_filter_task_execution_by_greater_than_value(self): created0, created1 = self._create_task_executions() _filter = filter_utils.create_or_update_filter( 'created_at', created0['created_at'], 'gt' ) fetched = db_api.get_task_executions(**_filter) self.assertEqual(1, len(fetched)) self.assertEqual(created1, fetched[0]) def test_filter_task_execution_by_greater_than_equal_value(self): created0, created1 = self._create_task_executions() _filter = 
filter_utils.create_or_update_filter( 'created_at', created0['created_at'], 'gte' ) fetched = db_api.get_task_executions(**_filter) self.assertEqual(2, len(fetched)) self._assert_single_item(fetched, name=created0['name']) self._assert_single_item(fetched, name=created1['name']) def test_filter_task_execution_by_less_than_value(self): created0, created1 = self._create_task_executions() _filter = filter_utils.create_or_update_filter( 'created_at', created1['created_at'], 'lt' ) fetched = db_api.get_task_executions(**_filter) self.assertEqual(1, len(fetched)) self.assertEqual(created0, fetched[0]) def test_filter_task_execution_by_less_than_equal_value(self): created0, created1 = self._create_task_executions() _filter = filter_utils.create_or_update_filter( 'created_at', created1['created_at'], 'lte' ) fetched = db_api.get_task_executions(**_filter) self.assertEqual(2, len(fetched)) self._assert_single_item(fetched, name=created0['name']) self._assert_single_item(fetched, name=created1['name']) def test_filter_task_execution_by_values_in_list(self): created, _ = self._create_task_executions() _filter = filter_utils.create_or_update_filter( 'created_at', [created['created_at']], 'in' ) fetched = db_api.get_task_executions(**_filter) self.assertEqual(1, len(fetched)) self.assertEqual(created, fetched[0]) def test_filter_task_execution_by_values_notin_list(self): created0, created1 = self._create_task_executions() _filter = filter_utils.create_or_update_filter( 'created_at', [created0['created_at']], 'nin' ) fetched = db_api.get_task_executions(**_filter) self.assertEqual(1, len(fetched)) self.assertEqual(created1, fetched[0]) def test_filter_task_execution_by_multiple_columns(self): created0, created1 = self._create_task_executions() _filter = filter_utils.create_or_update_filter( 'created_at', [created0['created_at'], created1['created_at']], 'in' ) _filter = filter_utils.create_or_update_filter( 'name', created1.name, 'eq', _filter ) fetched = 
db_api.get_task_executions(**_filter) self.assertEqual(1, len(fetched)) self.assertEqual(created1, fetched[0]) def test_delete_task_execution(self): wf_ex = db_api.create_workflow_execution(WF_EXECS[0]) values = copy.deepcopy(TASK_EXECS[0]) values.update({'workflow_execution_id': wf_ex.id}) created = db_api.create_task_execution(values) fetched = db_api.get_task_execution(created.id) self.assertEqual(created, fetched) db_api.delete_task_execution(created.id) self.assertRaises( exc.DBEntityNotFoundError, db_api.get_task_execution, created.id ) def test_get_incomplete_task_executions(self): wf_ex = db_api.create_workflow_execution(WF_EXECS[0]) values = copy.deepcopy(TASK_EXECS[0]) values.update({'workflow_execution_id': wf_ex.id}) values['state'] = 'RUNNING' task_ex1 = db_api.create_task_execution(values) task_execs = db_api.get_incomplete_task_executions( workflow_execution_id=wf_ex.id ) self.assertEqual(1, len(task_execs)) self.assertEqual(task_ex1, task_execs[0]) self.assertEqual( 1, db_api.get_incomplete_task_executions_count( workflow_execution_id=wf_ex.id ) ) # Add one more task. values = copy.deepcopy(TASK_EXECS[1]) values.update({'workflow_execution_id': wf_ex.id}) values['state'] = 'SUCCESS' db_api.create_task_execution(values) # There should still be only one incomplete task.
task_execs = db_api.get_incomplete_task_executions( workflow_execution_id=wf_ex.id ) self.assertEqual(1, len(task_execs)) self.assertEqual(task_ex1, task_execs[0]) self.assertEqual( 1, db_api.get_incomplete_task_executions_count( workflow_execution_id=wf_ex.id ) ) def test_task_execution_repr(self): wf_ex = db_api.create_workflow_execution(WF_EXECS[0]) values = copy.deepcopy(TASK_EXECS[0]) values.update({'workflow_execution_id': wf_ex.id}) s = db_api.create_task_execution(values).__repr__() self.assertIn('TaskExecution ', s) self.assertIn("'state': 'IDLE'", s) self.assertIn("'name': 'my_task1'", s) def _create_task_executions(self): wf_ex = db_api.create_workflow_execution(WF_EXECS[0]) values = copy.deepcopy(TASK_EXECS[0]) values.update({'workflow_execution_id': wf_ex.id}) created0 = db_api.create_task_execution(values) values = copy.deepcopy(TASK_EXECS[1]) values.update({'workflow_execution_id': wf_ex.id}) created1 = db_api.create_task_execution(values) return created0, created1 CRON_TRIGGERS = [ { 'id': '11111111-1111-1111-1111-111111111111', 'name': 'trigger1', 'pattern': '* * * * *', 'workflow_name': 'my_wf', 'workflow_id': None, 'workflow_input': {}, 'next_execution_time': datetime.datetime.now() + datetime.timedelta(days=1), 'remaining_executions': 42, 'scope': 'private', 'project_id': '' }, { 'id': '22222222-2222-2222-2222-2222222c2222', 'name': 'trigger2', 'pattern': '* * * * *', 'workflow_name': 'my_wf', 'workflow_id': None, 'workflow_input': {'param': 'val'}, 'next_execution_time': datetime.datetime.now() + datetime.timedelta(days=1), 'remaining_executions': 42, 'scope': 'private', 'project_id': '' }, ] class CronTriggerTest(SQLAlchemyTest): def setUp(self): super(CronTriggerTest, self).setUp() self.wf = db_api.create_workflow_definition({'name': 'my_wf'}) for ct in CRON_TRIGGERS: ct['workflow_id'] = self.wf.id def test_create_and_get_and_load_cron_trigger(self): created = db_api.create_cron_trigger(CRON_TRIGGERS[0]) fetched = 
db_api.get_cron_trigger(created.name) self.assertEqual(created, fetched) fetched = db_api.load_cron_trigger(created.name) self.assertEqual(created, fetched) self.assertIsNone(db_api.load_cron_trigger("not-existing-trigger")) def test_create_cron_trigger_duplicate_without_auth(self): cfg.CONF.set_default('auth_enable', False, group='pecan') db_api.create_cron_trigger(CRON_TRIGGERS[0]) self.assertRaises( exc.DBDuplicateEntryError, db_api.create_cron_trigger, CRON_TRIGGERS[0] ) def test_update_cron_trigger(self): created = db_api.create_cron_trigger(CRON_TRIGGERS[0]) self.assertIsNone(created.updated_at) updated, updated_count = db_api.update_cron_trigger( created.name, {'pattern': '*/1 * * * *'} ) self.assertEqual('*/1 * * * *', updated.pattern) self.assertEqual(1, updated_count) fetched = db_api.get_cron_trigger(created.name) self.assertEqual(updated, fetched) self.assertIsNotNone(fetched.updated_at) # Test update_cron_trigger and query_filter with results updated, updated_count = db_api.update_cron_trigger( created.name, {'pattern': '*/1 * * * *'}, query_filter={'name': created.name} ) self.assertEqual(updated, fetched) self.assertEqual(1, updated_count) # Test update_cron_trigger and query_filter without results. The filter # matches no rows, so the trigger is returned unchanged. updated, updated_count = db_api.update_cron_trigger( created.name, {'pattern': '*/1 * * * *'}, query_filter={'name': 'not-existing-id'} ) self.assertEqual(fetched, updated) self.assertEqual(0, updated_count) def test_update_cron_trigger_by_id(self): created = db_api.create_cron_trigger(CRON_TRIGGERS[0]) self.assertIsNone(created.updated_at) updated, updated_count = db_api.update_cron_trigger( created.id, {'pattern': '*/1 * * * *'} ) self.assertEqual('*/1 * * * *', updated.pattern) self.assertEqual(1, updated_count) def test_create_or_update_cron_trigger(self): name = 'not-existing-id' self.assertIsNone(db_api.load_cron_trigger(name)) created = db_api.create_or_update_cron_trigger(name, CRON_TRIGGERS[0]) self.assertIsNotNone(created)
self.assertIsNotNone(created.name) updated = db_api.create_or_update_cron_trigger( created.name, {'pattern': '*/1 * * * *'} ) self.assertEqual('*/1 * * * *', updated.pattern) fetched = db_api.get_cron_trigger(created.name) self.assertEqual(updated, fetched) def test_get_cron_triggers(self): created0 = db_api.create_cron_trigger(CRON_TRIGGERS[0]) created1 = db_api.create_cron_trigger(CRON_TRIGGERS[1]) fetched = db_api.get_cron_triggers(pattern='* * * * *') self.assertEqual(2, len(fetched)) self._assert_single_item(fetched, name=created0['name']) self._assert_single_item(fetched, name=created1['name']) def test_get_cron_trigger(self): cron_trigger = db_api.create_cron_trigger(CRON_TRIGGERS[0]) # Get by id is ok fetched = db_api.get_cron_trigger(cron_trigger.id) self.assertEqual(cron_trigger, fetched) # Get by name is ok fetched = db_api.get_cron_trigger(cron_trigger.name) self.assertEqual(cron_trigger, fetched) def test_get_cron_trigger_not_found(self): self.assertRaises( exc.DBEntityNotFoundError, db_api.get_cron_trigger, 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa', ) self.assertRaises( exc.DBEntityNotFoundError, db_api.get_cron_trigger, 'not-exists-cron-trigger', ) def test_get_cron_trigger_by_id(self): cron_trigger_1 = db_api.create_cron_trigger(CRON_TRIGGERS[0]) cron_trigger_2 = db_api.create_cron_trigger(CRON_TRIGGERS[1]) fetched = db_api.get_cron_trigger_by_id(cron_trigger_1.id) self.assertEqual(cron_trigger_1, fetched) fetched = db_api.get_cron_trigger_by_id(cron_trigger_2.id) self.assertEqual(cron_trigger_2, fetched) self.assertRaises( exc.DBEntityNotFoundError, db_api.get_cron_trigger_by_id, 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa', ) def test_get_cron_triggers_other_tenant(self): created0 = db_api.create_cron_trigger(CRON_TRIGGERS[0]) # Switch to another tenant. 
auth_context.set_ctx(USER_CTX) fetched = db_api.get_cron_triggers( insecure=True, pattern='* * * * *', project_id=security.DEFAULT_PROJECT_ID ) self.assertEqual(1, len(fetched)) self.assertEqual(created0, fetched[0]) def test_delete_cron_trigger(self): created = db_api.create_cron_trigger(CRON_TRIGGERS[0]) fetched = db_api.get_cron_trigger(created.name) self.assertEqual(created, fetched) rowcount = db_api.delete_cron_trigger(created.name) self.assertEqual(1, rowcount) self.assertRaises( exc.DBEntityNotFoundError, db_api.get_cron_trigger, created.name ) def test_delete_cron_trigger_by_id(self): created = db_api.create_cron_trigger(CRON_TRIGGERS[0]) fetched = db_api.get_cron_trigger(created.name) self.assertEqual(created, fetched) rowcount = db_api.delete_cron_trigger(created.id) self.assertEqual(1, rowcount) self.assertRaises( exc.DBEntityNotFoundError, db_api.get_cron_trigger, created.id ) def test_cron_trigger_repr(self): s = db_api.create_cron_trigger(CRON_TRIGGERS[0]).__repr__() self.assertIn('CronTrigger ', s) self.assertIn("'pattern': '* * * * *'", s) self.assertIn("'name': 'trigger1'", s) ENVIRONMENTS = [ { 'name': 'env1', 'description': 'Test Environment #1', 'scope': 'private', 'variables': { 'server': 'localhost', 'database': 'test', 'timeout': 600, 'verbose': True } }, { 'name': 'env2', 'description': 'Test Environment #2', 'scope': 'public', 'variables': { 'server': '127.0.0.1', 'database': 'temp', 'timeout': 300, 'verbose': False } } ] class EnvironmentTest(SQLAlchemyTest): def setUp(self): super(EnvironmentTest, self).setUp() db_api.delete_environments() def test_create_and_get_and_load_environment(self): created = db_api.create_environment(ENVIRONMENTS[0]) fetched = db_api.get_environment(created.name) self.assertEqual(created, fetched) fetched = db_api.load_environment(created.name) self.assertEqual(created, fetched) self.assertIsNone(db_api.load_environment("not-existing-id")) def test_create_environment_duplicate_without_auth(self): 
cfg.CONF.set_default('auth_enable', False, group='pecan') db_api.create_environment(ENVIRONMENTS[0]) self.assertRaises( exc.DBDuplicateEntryError, db_api.create_environment, ENVIRONMENTS[0] ) def test_update_environment(self): created = db_api.create_environment(ENVIRONMENTS[0]) self.assertIsNone(created.updated_at) updated = db_api.update_environment( created.name, {'description': 'my new desc'} ) self.assertEqual('my new desc', updated.description) fetched = db_api.get_environment(created.name) self.assertEqual(updated, fetched) self.assertIsNotNone(fetched.updated_at) def test_create_or_update_environment(self): name = 'not-existing-id' self.assertIsNone(db_api.load_environment(name)) created = db_api.create_or_update_environment(name, ENVIRONMENTS[0]) self.assertIsNotNone(created) self.assertIsNotNone(created.name) updated = db_api.create_or_update_environment( created.name, {'description': 'my new desc'} ) self.assertEqual('my new desc', updated.description) self.assertEqual( 'my new desc', db_api.load_environment(updated.name).description ) fetched = db_api.get_environment(created.name) self.assertEqual(updated, fetched) def test_get_environments(self): created0 = db_api.create_environment(ENVIRONMENTS[0]) created1 = db_api.create_environment(ENVIRONMENTS[1]) fetched = db_api.get_environments() self.assertEqual(2, len(fetched)) self._assert_single_item(fetched, name=created0['name']) self._assert_single_item(fetched, name=created1['name']) def test_delete_environment(self): created = db_api.create_environment(ENVIRONMENTS[0]) fetched = db_api.get_environment(created.name) self.assertEqual(created, fetched) db_api.delete_environment(created.name) self.assertRaises( exc.DBEntityNotFoundError, db_api.get_environment, created.name ) def test_environment_repr(self): s = db_api.create_environment(ENVIRONMENTS[0]).__repr__() self.assertIn('Environment ', s) self.assertIn("'description': 'Test Environment #1'", s) self.assertIn("'name': 'env1'", s) class 
TXTest(SQLAlchemyTest): def test_rollback(self): db_api.start_tx() try: created = db_api.create_workbook(WORKBOOKS[0]) fetched = db_api.get_workbook(created.name) self.assertEqual(created, fetched) self.assertTrue(self.is_db_session_open()) db_api.rollback_tx() finally: db_api.end_tx() self.assertFalse(self.is_db_session_open()) self.assertRaises( exc.DBEntityNotFoundError, db_api.get_workbook, created['id'] ) self.assertFalse(self.is_db_session_open()) def test_commit(self): db_api.start_tx() try: created = db_api.create_workbook(WORKBOOKS[0]) fetched = db_api.get_workbook(created.name) self.assertEqual(created, fetched) self.assertTrue(self.is_db_session_open()) db_api.commit_tx() finally: db_api.end_tx() self.assertFalse(self.is_db_session_open()) fetched = db_api.get_workbook(created.name) self.assertEqual(created, fetched) self.assertFalse(self.is_db_session_open()) def test_commit_transaction(self): with db_api.transaction(): created = db_api.create_workbook(WORKBOOKS[0]) fetched = db_api.get_workbook(created.name) self.assertEqual(created, fetched) self.assertTrue(self.is_db_session_open()) self.assertFalse(self.is_db_session_open()) fetched = db_api.get_workbook(created.name) self.assertEqual(created, fetched) self.assertFalse(self.is_db_session_open()) def test_rollback_multiple_objects(self): db_api.start_tx() try: created = db_api.create_workflow_execution(WF_EXECS[0]) fetched = db_api.get_workflow_execution(created['id']) self.assertEqual(created, fetched) created_wb = db_api.create_workbook(WORKBOOKS[0]) fetched_wb = db_api.get_workbook(created_wb.name) self.assertEqual(created_wb, fetched_wb) self.assertTrue(self.is_db_session_open()) db_api.rollback_tx() finally: db_api.end_tx() self.assertFalse(self.is_db_session_open()) self.assertRaises( exc.DBEntityNotFoundError, db_api.get_workflow_execution, created.id ) self.assertRaises( exc.DBEntityNotFoundError, db_api.get_workbook, created_wb.name ) self.assertFalse(self.is_db_session_open()) def 
test_rollback_transaction(self): try: with db_api.transaction(): created = db_api.create_workbook(WORKBOOKS[0]) fetched = db_api.get_workbook(created.name) self.assertEqual(created, fetched) self.assertTrue(self.is_db_session_open()) db_api.create_workbook(WORKBOOKS[0]) except exc.DBDuplicateEntryError: pass self.assertFalse(self.is_db_session_open()) self.assertRaises( exc.DBEntityNotFoundError, db_api.get_workbook, created.name ) def test_commit_multiple_objects(self): db_api.start_tx() try: created = db_api.create_workflow_execution(WF_EXECS[0]) fetched = db_api.get_workflow_execution(created.id) self.assertEqual(created, fetched) created_wb = db_api.create_workbook(WORKBOOKS[0]) fetched_wb = db_api.get_workbook(created_wb.name) self.assertEqual(created_wb, fetched_wb) self.assertTrue(self.is_db_session_open()) db_api.commit_tx() finally: db_api.end_tx() self.assertFalse(self.is_db_session_open()) with db_api.transaction(): fetched = db_api.get_workflow_execution(created.id) self.assertEqual(created, fetched) fetched_wb = db_api.get_workbook(created_wb.name) self.assertEqual(created_wb, fetched_wb) self.assertFalse(self.is_db_session_open()) RESOURCE_MEMBERS = [ { 'resource_id': '123e4567-e89b-12d3-a456-426655440000', 'resource_type': 'workflow', 'project_id': security.get_project_id(), 'member_id': USER_CTX.project_id, 'status': 'pending', }, { 'resource_id': '123e4567-e89b-12d3-a456-426655440000', 'resource_type': 'workflow', 'project_id': security.get_project_id(), 'member_id': '111', 'status': 'pending', }, ] class ResourceMemberTest(SQLAlchemyTest): def test_create_and_get_resource_member(self): created_1 = db_api.create_resource_member(RESOURCE_MEMBERS[0]) created_2 = db_api.create_resource_member(RESOURCE_MEMBERS[1]) fetched = db_api.get_resource_member( '123e4567-e89b-12d3-a456-426655440000', 'workflow', USER_CTX.project_id ) self.assertEqual(created_1, fetched) # Switch to another tenant. 
auth_context.set_ctx(USER_CTX) fetched = db_api.get_resource_member( '123e4567-e89b-12d3-a456-426655440000', 'workflow', USER_CTX.project_id ) self.assertEqual(created_1, fetched) # Tenant A cannot see the membership of a resource shared to Tenant B. self.assertRaises( exc.DBEntityNotFoundError, db_api.get_resource_member, '123e4567-e89b-12d3-a456-426655440000', 'workflow', created_2.member_id ) def test_create_resource_member_duplicate(self): db_api.create_resource_member(RESOURCE_MEMBERS[0]) self.assertRaises( exc.DBDuplicateEntryError, db_api.create_resource_member, RESOURCE_MEMBERS[0] ) def test_get_resource_members_by_owner(self): for res_member in RESOURCE_MEMBERS: db_api.create_resource_member(res_member) fetched = db_api.get_resource_members( '123e4567-e89b-12d3-a456-426655440000', 'workflow', ) self.assertEqual(2, len(fetched)) def test_get_resource_members_not_owner(self): created = db_api.create_resource_member(RESOURCE_MEMBERS[0]) db_api.create_resource_member(RESOURCE_MEMBERS[1]) # Switch to another tenant. auth_context.set_ctx(USER_CTX) fetched = db_api.get_resource_members( created.resource_id, 'workflow', ) self.assertEqual(1, len(fetched)) self.assertEqual(created, fetched[0]) def test_update_resource_member_by_member(self): created = db_api.create_resource_member(RESOURCE_MEMBERS[0]) # Switch to another tenant.
auth_context.set_ctx(USER_CTX) updated = db_api.update_resource_member( created.resource_id, 'workflow', USER_CTX.project_id, {'status': 'accepted'} ) self.assertEqual(created.id, updated.id) self.assertEqual('accepted', updated.status) def test_update_resource_member_by_owner(self): created = db_api.create_resource_member(RESOURCE_MEMBERS[0]) self.assertRaises( exc.DBEntityNotFoundError, db_api.update_resource_member, created.resource_id, 'workflow', USER_CTX.project_id, {'status': 'accepted'} ) def test_delete_resource_member(self): created = db_api.create_resource_member(RESOURCE_MEMBERS[0]) db_api.delete_resource_member( created.resource_id, 'workflow', USER_CTX.project_id, ) fetched = db_api.get_resource_members( created.resource_id, 'workflow', ) self.assertEqual(0, len(fetched)) def test_delete_resource_member_not_owner(self): created = db_api.create_resource_member(RESOURCE_MEMBERS[0]) # Switch to another tenant. auth_context.set_ctx(USER_CTX) self.assertRaises( exc.DBEntityNotFoundError, db_api.delete_resource_member, created.resource_id, 'workflow', USER_CTX.project_id, ) def test_delete_resource_member_already_deleted(self): created = db_api.create_resource_member(RESOURCE_MEMBERS[0]) db_api.delete_resource_member( created.resource_id, 'workflow', USER_CTX.project_id, ) self.assertRaises( exc.DBEntityNotFoundError, db_api.delete_resource_member, created.resource_id, 'workflow', USER_CTX.project_id, ) def test_delete_nonexistent_resource_member(self): self.assertRaises( exc.DBEntityNotFoundError, db_api.delete_resource_member, 'nonexitent_resource', 'workflow', 'nonexitent_member', ) class WorkflowSharingTest(SQLAlchemyTest): def test_get_shared_workflow(self): wf = db_api.create_workflow_definition(WF_DEFINITIONS[1]) # Switch to another tenant. auth_context.set_ctx(USER_CTX) self.assertRaises( exc.DBEntityNotFoundError, db_api.get_workflow_definition, wf.id ) # Switch to original tenant, share workflow to another tenant. 
auth_context.set_ctx(DEFAULT_CTX) workflow_sharing = { 'resource_id': wf.id, 'resource_type': 'workflow', 'project_id': security.get_project_id(), 'member_id': USER_CTX.project_id, 'status': 'pending', } db_api.create_resource_member(workflow_sharing) # Switch to another tenant, accept the sharing, get workflows. auth_context.set_ctx(USER_CTX) db_api.update_resource_member( wf.id, 'workflow', USER_CTX.project_id, {'status': 'accepted'} ) fetched = db_api.get_workflow_definition(wf.id) self.assertEqual(wf, fetched) def test_owner_delete_shared_workflow(self): wf = db_api.create_workflow_definition(WF_DEFINITIONS[1]) workflow_sharing = { 'resource_id': wf.id, 'resource_type': 'workflow', 'project_id': security.get_project_id(), 'member_id': USER_CTX.project_id, 'status': 'pending', } db_api.create_resource_member(workflow_sharing) # Switch to another tenant, accept the sharing. auth_context.set_ctx(USER_CTX) db_api.update_resource_member( wf.id, 'workflow', USER_CTX.project_id, {'status': 'accepted'} ) fetched = db_api.get_workflow_definition(wf.id) self.assertEqual(wf, fetched) # Switch to original tenant, delete the workflow. auth_context.set_ctx(DEFAULT_CTX) db_api.delete_workflow_definition(wf.id) # Switch to another tenant, can not see that workflow. auth_context.set_ctx(USER_CTX) self.assertRaises( exc.DBEntityNotFoundError, db_api.get_workflow_definition, wf.id ) def test_owner_delete_shared_workflow_has_crontrigger(self): wf = db_api.create_workflow_definition(WF_DEFINITIONS[1]) workflow_sharing = { 'resource_id': wf.id, 'resource_type': 'workflow', 'project_id': security.get_project_id(), 'member_id': USER_CTX.project_id, 'status': 'pending', } db_api.create_resource_member(workflow_sharing) # Switch to another tenant, accept the sharing. auth_context.set_ctx(USER_CTX) db_api.update_resource_member( wf.id, 'workflow', USER_CTX.project_id, {'status': 'accepted'} ) # Create cron trigger using the shared workflow. 
CRON_TRIGGERS[0]['workflow_id'] = wf.id db_api.create_cron_trigger(CRON_TRIGGERS[0]) # Switch to original tenant, try to delete the workflow. auth_context.set_ctx(DEFAULT_CTX) self.assertRaises( exc.DBError, db_api.delete_workflow_definition, wf.id ) EVENT_TRIGGERS = [ { 'name': 'trigger1', 'workflow_id': '', 'workflow_input': {}, 'workflow_params': {}, 'exchange': 'openstack', 'topic': 'notification', 'event': 'compute.create_instance', }, { 'name': 'trigger2', 'workflow_id': '', 'workflow_input': {}, 'workflow_params': {}, 'exchange': 'openstack', 'topic': 'notification', 'event': 'compute.delete_instance', }, ] class EventTriggerTest(SQLAlchemyTest): def setUp(self): super(EventTriggerTest, self).setUp() self.wf = db_api.create_workflow_definition({'name': 'my_wf'}) for et in EVENT_TRIGGERS: et['workflow_id'] = self.wf.id def test_create_and_get_event_trigger(self): created = db_api.create_event_trigger(EVENT_TRIGGERS[0]) fetched = db_api.get_event_trigger(created.id) self.assertEqual(created, fetched) def test_get_event_triggers_not_insecure(self): for t in EVENT_TRIGGERS: db_api.create_event_trigger(t) fetched = db_api.get_event_triggers() self.assertEqual(2, len(fetched)) def test_get_event_triggers_insecure(self): db_api.create_event_trigger(EVENT_TRIGGERS[0]) # Switch to another tenant. auth_context.set_ctx(USER_CTX) db_api.create_event_trigger(EVENT_TRIGGERS[1]) fetched = db_api.get_event_triggers() self.assertEqual(1, len(fetched)) fetched = db_api.get_event_triggers(insecure=True) self.assertEqual(2, len(fetched)) def test_update_event_trigger(self): created = db_api.create_event_trigger(EVENT_TRIGGERS[0]) # Need another existing workflow to update the event trigger # because of the foreign key constraint.
new_wf = db_api.create_workflow_definition({'name': 'my_wf1'}) db_api.update_event_trigger( created.id, {'workflow_id': new_wf.id} ) updated = db_api.get_event_trigger(created.id) self.assertEqual(new_wf.id, updated.workflow_id) def test_delete_event_triggers(self): created = db_api.create_event_trigger(EVENT_TRIGGERS[0]) db_api.delete_event_trigger(created.id) self.assertRaises( exc.DBEntityNotFoundError, db_api.get_event_trigger, created.id ) class LockTest(SQLAlchemyTest): def test_create_lock(self): # This test just ensures that DB model is OK. # It doesn't test the real intention of this model though. db_api.create_named_lock('lock1') locks = db_api.get_named_locks() self.assertEqual(1, len(locks)) self.assertEqual('lock1', locks[0].name) db_api.delete_named_lock('invalid_lock_id') locks = db_api.get_named_locks() self.assertEqual(1, len(locks)) db_api.delete_named_lock(locks[0].id) locks = db_api.get_named_locks() self.assertEqual(0, len(locks)) def test_with_named_lock(self): name = 'lock1' with db_api.named_lock(name): # Make sure that within 'with' section the lock record exists. self.assertEqual(1, len(db_api.get_named_locks())) # Make sure that outside 'with' section the lock record does not exist. self.assertEqual(0, len(db_api.get_named_locks())) mistral-6.0.0/mistral/tests/unit/db/v2/test_transactions.py0000666000175100017510000000437613245513261024063 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg from mistral.db.v2 import api as db_api from mistral.tests.unit import base as test_base WF_EXECS = [ { 'name': '1', 'spec': {}, 'start_params': {}, 'state': 'RUNNING', 'state_info': "Running...", 'created_at': None, 'updated_at': None, 'context': None, 'task_id': None, 'trust_id': None }, { 'name': '1', 'spec': {}, 'start_params': {}, 'state': 'RUNNING', 'state_info': "Running...", 'created_at': None, 'updated_at': None, 'context': None, 'task_id': None, 'trust_id': None } ] class TransactionsTest(test_base.DbTestCase): def setUp(self): super(TransactionsTest, self).setUp() cfg.CONF.set_default('auth_enable', True, group='pecan') self.addCleanup( cfg.CONF.set_default, 'auth_enable', False, group='pecan' ) def test_read_only_transactions(self): with db_api.transaction(): db_api.create_workflow_execution(WF_EXECS[0]) wf_execs = db_api.get_workflow_executions() self.assertEqual(1, len(wf_execs)) wf_execs = db_api.get_workflow_executions() self.assertEqual(1, len(wf_execs)) with db_api.transaction(read_only=True): db_api.create_workflow_execution(WF_EXECS[1]) wf_execs = db_api.get_workflow_executions() self.assertEqual(2, len(wf_execs)) wf_execs = db_api.get_workflow_executions() self.assertEqual(1, len(wf_execs)) mistral-6.0.0/mistral/tests/unit/db/v2/test_locking.py0000666000175100017510000000666713245513261023006 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import eventlet from oslo_config import cfg import random import testtools from mistral import context as auth_context from mistral.db.sqlalchemy import sqlite_lock from mistral.db.v2.sqlalchemy import api as db_api from mistral.db.v2.sqlalchemy import models as db_models from mistral.tests.unit import base as test_base WF_EXEC = { 'name': '1', 'spec': {}, 'start_params': {}, 'state': 'RUNNING', 'state_info': "Running...", 'created_at': None, 'updated_at': None, 'context': None, 'task_id': None, 'trust_id': None } @testtools.skipIf( 'sqlite' not in cfg.CONF.database.connection, 'Not using SQLite for DB backend.') class SQLiteLocksTest(test_base.DbTestCase): def setUp(self): super(SQLiteLocksTest, self).setUp() cfg.CONF.set_default('auth_enable', True, group='pecan') self.addCleanup( cfg.CONF.set_default, 'auth_enable', False, group='pecan' ) def _random_sleep(self): eventlet.sleep(random.Random().randint(0, 10) * 0.001) def _run_acquire_release_sqlite_lock(self, obj_id, session): self._random_sleep() sqlite_lock.acquire_lock(obj_id, session) self._random_sleep() sqlite_lock.release_locks(session) def test_acquire_release_sqlite_lock(self): threads = [] id = "object_id" number = 500 for i in range(1, number): threads.append( eventlet.spawn(self._run_acquire_release_sqlite_lock, id, i) ) [t.wait() for t in threads] [t.kill() for t in threads] self.assertEqual(1, len(sqlite_lock.get_locks())) sqlite_lock.cleanup() self.assertEqual(0, len(sqlite_lock.get_locks())) def _run_correct_locking(self, wf_ex): # Set context info for the thread. 
auth_context.set_ctx(test_base.get_context()) self._random_sleep() with db_api.transaction(): # Lock workflow execution and get the most up-to-date object. wf_ex = db_api.acquire_lock(db_models.WorkflowExecution, wf_ex.id) # Refresh the object. db_api.get_workflow_execution(wf_ex.id) wf_ex.name = str(int(wf_ex.name) + 1) return wf_ex.name def test_correct_locking(self): wf_ex = db_api.create_workflow_execution(WF_EXEC) threads = [] number = 500 for i in range(1, number): threads.append( eventlet.spawn(self._run_correct_locking, wf_ex) ) [t.wait() for t in threads] [t.kill() for t in threads] wf_ex = db_api.get_workflow_execution(wf_ex.id) print("Correct locking test gave object name: %s" % wf_ex.name) self.assertEqual(str(number), wf_ex.name) mistral-6.0.0/mistral/tests/unit/db/v2/__init__.py0000666000175100017510000000000013245513261022027 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/unit/db/v2/test_db_model.py0000666000175100017510000000504313245513261023110 0ustar zuulzuul00000000000000# Copyright 2017 - Nokia Networks. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import copy import datetime from mistral.db.v2.sqlalchemy import api as db_api from mistral.tests.unit import base as test_base from mistral import utils WF_EXEC = { 'id': 'c0f3be41-88b9-4c86-a669-83e77cd0a1b8', 'spec': {}, 'params': {'task': 'my_task1'}, 'project_id': '', 'scope': 'PUBLIC', 'state': 'IDLE', 'state_info': "Running...", 'created_at': datetime.datetime(2016, 12, 1, 15, 0, 0), 'updated_at': None, 'context': None, 'task_execution_id': None, 'description': None, 'output': None, 'accepted': False, 'some_invalid_field': "foobar" } class DBModelTest(test_base.DbTestCase): def test_iterate_column_names(self): wf_ex = db_api.create_workflow_execution(WF_EXEC) self.assertIsNotNone(wf_ex) c_names = [c_name for c_name in wf_ex.iter_column_names()] expected = set(WF_EXEC.keys()) expected.remove('some_invalid_field') self.assertEqual(expected, set(c_names)) def test_iterate_columns(self): wf_ex = db_api.create_workflow_execution(WF_EXEC) self.assertIsNotNone(wf_ex) values = {c_name: c_val for c_name, c_val in wf_ex.iter_columns()} expected = copy.copy(WF_EXEC) del expected['some_invalid_field'] self.assertDictEqual(expected, values) def test_to_dict(self): wf_ex = db_api.create_workflow_execution(WF_EXEC) self.assertIsNotNone(wf_ex) expected = copy.copy(WF_EXEC) del expected['some_invalid_field'] actual = wf_ex.to_dict() # The method to_dict() returns date as strings. So, we have to # check them separately. self.assertEqual( utils.datetime_to_str(expected['created_at']), actual['created_at'] ) # Now check the rest of the columns. del expected['created_at'] del actual['created_at'] self.assertDictEqual(expected, actual) mistral-6.0.0/mistral/tests/unit/db/__init__.py0000666000175100017510000000000013245513261021500 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/unit/config.py0000666000175100017510000000174013245513261020635 0ustar zuulzuul00000000000000# Copyright 2015 - StackStorm, Inc. 
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os

from oslo_config import cfg


def parse_args():
    # Look for .mistral.conf in the project directory by default.
    project_dir = '%s/../../..' % os.path.dirname(__file__)
    config_file = '%s/.mistral.conf' % os.path.realpath(project_dir)

    config_files = [config_file] if os.path.isfile(config_file) else None

    cfg.CONF(args=[], default_config_files=config_files)
mistral-6.0.0/mistral/tests/unit/engine/
mistral-6.0.0/mistral/tests/unit/engine/test_dataflow.py
# Copyright 2014 - Mirantis, Inc.
# Copyright 2015 - StackStorm, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg from mistral.db.v2 import api as db_api from mistral.db.v2.sqlalchemy import models from mistral import exceptions as exc from mistral import expressions as expr from mistral.services import workbooks as wb_service from mistral.services import workflows as wf_service from mistral.tests.unit import base as test_base from mistral.tests.unit.engine import base as engine_test_base from mistral.workflow import data_flow from mistral.workflow import states import sys # Use the set_default method to set value otherwise in certain test cases # the change in value is not permanent. cfg.CONF.set_default('auth_enable', False, group='pecan') class DataFlowEngineTest(engine_test_base.EngineTestCase): def test_linear_dataflow(self): linear_wf = """--- version: '2.0' wf: type: direct tasks: task1: action: std.echo output="Hi" publish: hi: <% task(task1).result %> on-success: - task2 task2: action: std.echo output="Morpheus" publish: to: <% task(task2).result %> on-success: - task3 task3: publish: result: "<% $.hi %>, <% $.to %>! Your <% env().from %>." """ wf_service.create_workflows(linear_wf) # Start workflow. wf_ex = self.engine.start_workflow('wf', env={'from': 'Neo'}) self.await_workflow_success(wf_ex.id) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions self.assertEqual(states.SUCCESS, wf_ex.state) task1 = self._assert_single_item(tasks, name='task1') task2 = self._assert_single_item(tasks, name='task2') task3 = self._assert_single_item(tasks, name='task3') self.assertEqual(states.SUCCESS, task3.state) self.assertDictEqual({'hi': 'Hi'}, task1.published) self.assertDictEqual({'to': 'Morpheus'}, task2.published) self.assertDictEqual( {'result': 'Hi, Morpheus! Your Neo.'}, task3.published ) # Make sure that task inbound context doesn't contain workflow # execution info. 
self.assertNotIn('__execution', task1.in_context) def test_linear_with_branches_dataflow(self): linear_with_branches_wf = """--- version: '2.0' wf: type: direct tasks: task1: action: std.echo output="Hi" publish: hi: <% task(task1).result %> progress: "completed task1" on-success: - notify - task2 task2: action: std.echo output="Morpheus" publish: to: <% task(task2).result %> progress: "completed task2" on-success: - notify - task3 task3: publish: result: "<% $.hi %>, <% $.to %>! Your <% env().from %>." progress: "completed task3" on-success: - notify notify: action: std.echo output=<% $.progress %> publish: progress: <% task(notify).result %> """ wf_service.create_workflows(linear_with_branches_wf) # Start workflow. wf_ex = self.engine.start_workflow('wf', env={'from': 'Neo'}) self.await_workflow_success(wf_ex.id) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions self.assertEqual(states.SUCCESS, wf_ex.state) task1 = self._assert_single_item(tasks, name='task1') task2 = self._assert_single_item(tasks, name='task2') task3 = self._assert_single_item(tasks, name='task3') notify_tasks = self._assert_multiple_items(tasks, 3, name='notify') notify_published_arr = [t.published['progress'] for t in notify_tasks] self.assertEqual(states.SUCCESS, task3.state) exp_published_arr = [ { 'hi': 'Hi', 'progress': 'completed task1' }, { 'to': 'Morpheus', 'progress': 'completed task2' }, { 'result': 'Hi, Morpheus! 
Your Neo.', 'progress': 'completed task3' } ] self.assertDictEqual(exp_published_arr[0], task1.published) self.assertDictEqual(exp_published_arr[1], task2.published) self.assertDictEqual(exp_published_arr[2], task3.published) self.assertIn( exp_published_arr[0]['progress'], notify_published_arr ) self.assertIn( exp_published_arr[1]['progress'], notify_published_arr ) self.assertIn( exp_published_arr[2]['progress'], notify_published_arr ) def test_parallel_tasks(self): parallel_tasks_wf = """--- version: '2.0' wf: type: direct tasks: task1: action: std.echo output=1 publish: var1: <% task(task1).result %> task2: action: std.echo output=2 publish: var2: <% task(task2).result %> """ wf_service.create_workflows(parallel_tasks_wf) # Start workflow. wf_ex = self.engine.start_workflow('wf',) self.await_workflow_success(wf_ex.id) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) wf_output = wf_ex.output tasks = wf_ex.task_executions self.assertEqual(states.SUCCESS, wf_ex.state) self.assertEqual(2, len(tasks)) task1 = self._assert_single_item(tasks, name='task1') task2 = self._assert_single_item(tasks, name='task2') self.assertEqual(states.SUCCESS, task1.state) self.assertEqual(states.SUCCESS, task2.state) self.assertDictEqual({'var1': 1}, task1.published) self.assertDictEqual({'var2': 2}, task2.published) self.assertEqual(1, wf_output['var1']) self.assertEqual(2, wf_output['var2']) def test_parallel_tasks_complex(self): parallel_tasks_complex_wf = """--- version: '2.0' wf: type: direct tasks: task1: action: std.noop publish: var1: 1 on-complete: - task12 task12: action: std.noop publish: var12: 12 on-complete: - task13 - task14 task13: action: std.fail description: | Since this task fails we expect that 'var13' won't go into context. Only 'var14'. 
publish: var13: 13 on-error: - noop task14: publish: var14: 14 task2: publish: var2: 2 on-complete: - task21 task21: publish: var21: 21 """ wf_service.create_workflows(parallel_tasks_complex_wf) # Start workflow. wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) wf_output = wf_ex.output tasks = wf_ex.task_executions self.assertEqual(states.SUCCESS, wf_ex.state) self.assertEqual(6, len(tasks)) task1 = self._assert_single_item(tasks, name='task1') task12 = self._assert_single_item(tasks, name='task12') task13 = self._assert_single_item(tasks, name='task13') task14 = self._assert_single_item(tasks, name='task14') task2 = self._assert_single_item(tasks, name='task2') task21 = self._assert_single_item(tasks, name='task21') self.assertEqual(states.SUCCESS, task1.state) self.assertEqual(states.SUCCESS, task12.state) self.assertEqual(states.ERROR, task13.state) self.assertEqual(states.SUCCESS, task14.state) self.assertEqual(states.SUCCESS, task2.state) self.assertEqual(states.SUCCESS, task21.state) self.assertDictEqual({'var1': 1}, task1.published) self.assertDictEqual({'var12': 12}, task12.published) self.assertDictEqual({'var14': 14}, task14.published) self.assertDictEqual({'var2': 2}, task2.published) self.assertDictEqual({'var21': 21}, task21.published) self.assertEqual(1, wf_output['var1']) self.assertEqual(12, wf_output['var12']) self.assertNotIn('var13', wf_output) self.assertEqual(14, wf_output['var14']) self.assertEqual(2, wf_output['var2']) self.assertEqual(21, wf_output['var21']) def test_sequential_tasks_publishing_same_var(self): var_overwrite_wf = """--- version: '2.0' wf: type: direct tasks: task1: action: std.echo output="Hi" publish: greeting: <% task(task1).result %> on-success: - task2 task2: action: std.echo output="Yo" publish: greeting: <% task(task2).result %> on-success: - task3 task3: 
action: std.echo output="Morpheus" publish: to: <% task(task3).result %> on-success: - task4 task4: publish: result: "<% $.greeting %>, <% $.to %>! <% env().from %>." """ wf_service.create_workflows(var_overwrite_wf) # Start workflow. wf_ex = self.engine.start_workflow('wf', env={'from': 'Neo'}) self.await_workflow_success(wf_ex.id) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions self.assertEqual(states.SUCCESS, wf_ex.state) task1 = self._assert_single_item(tasks, name='task1') task2 = self._assert_single_item(tasks, name='task2') task3 = self._assert_single_item(tasks, name='task3') task4 = self._assert_single_item(tasks, name='task4') self.assertEqual(states.SUCCESS, task4.state) self.assertDictEqual({'greeting': 'Hi'}, task1.published) self.assertDictEqual({'greeting': 'Yo'}, task2.published) self.assertDictEqual({'to': 'Morpheus'}, task3.published) self.assertDictEqual( {'result': 'Yo, Morpheus! Neo.'}, task4.published ) def test_sequential_tasks_publishing_same_structured(self): var_overwrite_wf = """--- version: '2.0' wf: type: direct tasks: task1: publish: greeting: {"a": "b"} on-success: - task2 task2: publish: greeting: {} on-success: - task3 task3: publish: result: <% $.greeting %> """ wf_service.create_workflows(var_overwrite_wf) # Start workflow. wf_ex = self.engine.start_workflow('wf', env={'from': 'Neo'}) self.await_workflow_success(wf_ex.id) # Note: We need to reread execution to access related tasks. 
with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions self.assertEqual(states.SUCCESS, wf_ex.state) task1 = self._assert_single_item(tasks, name='task1') task2 = self._assert_single_item(tasks, name='task2') task3 = self._assert_single_item(tasks, name='task3') self.assertEqual(states.SUCCESS, task3.state) self.assertDictEqual({'greeting': {'a': 'b'}}, task1.published) self.assertDictEqual({'greeting': {}}, task2.published) self.assertDictEqual({'result': {}}, task3.published) def test_linear_dataflow_implicit_publish(self): linear_wf = """--- version: '2.0' wf: type: direct tasks: task1: action: std.echo output="Hi" on-success: - task21 - task22 task21: action: std.echo output="Morpheus" on-success: - task4 task22: action: std.echo output="Neo" on-success: - task4 task4: join: all publish: result: > <% task(task1).result %>, <% task(task21).result %>! Your <% task(task22).result %>. """ wf_service.create_workflows(linear_wf) # Start workflow. wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions task4 = self._assert_single_item(tasks, name='task4') self.assertDictEqual( {'result': 'Hi, Morpheus! Your Neo.\n'}, task4.published ) def test_destroy_result(self): linear_wf = """--- version: '2.0' wf: type: direct tasks: task1: action: std.echo output=["Hi", "John Doe!"] publish: hi: <% task(task1).result %> keep-result: false """ wf_service.create_workflows(linear_wf) # Start workflow. wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): # Note: We need to reread execution to access related tasks. 
wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions task1 = self._assert_single_item(tasks, name='task1') result = data_flow.get_task_execution_result(task1) # Published vars are saved. self.assertDictEqual( {'hi': ["Hi", "John Doe!"]}, task1.published ) # But all result is cleared. self.assertIsNone(result) def test_empty_with_items(self): wf = """--- version: "2.0" wf1_with_items: type: direct tasks: task1: with-items: i in <% list() %> action: std.echo output= "Task 1.<% $.i %>" publish: result: <% task(task1).result %> """ wf_service.create_workflows(wf) # Start workflow. wf_ex = self.engine.start_workflow('wf1_with_items') self.await_workflow_success(wf_ex.id) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task1 = self._assert_single_item( wf_ex.task_executions, name='task1' ) result = data_flow.get_task_execution_result(task1) self.assertListEqual([], result) def test_publish_on_error(self): wf_def = """--- version: '2.0' wf: type: direct output-on-error: out: <% $.hi %> tasks: task1: action: std.fail publish-on-error: hi: hello_from_error err: <% task(task1).result %> """ wf_service.create_workflows(wf_def) # Start workflow. wf_ex = self.engine.start_workflow('wf') self.await_workflow_error(wf_ex.id) with db_api.transaction(): # Note: We need to reread execution to access related tasks. 
wf_ex = db_api.get_workflow_execution(wf_ex.id) wf_output = wf_ex.output tasks = wf_ex.task_executions self.assertEqual(states.ERROR, wf_ex.state) task1 = self._assert_single_item(tasks, name='task1') self.assertEqual(states.ERROR, task1.state) self.assertEqual('hello_from_error', task1.published['hi']) self.assertIn( 'Fail action expected exception', task1.published['err'] ) self.assertEqual('hello_from_error', wf_output['out']) self.assertIn( 'Fail action expected exception', wf_output['result'] ) def test_output_on_error_wb_yaql_failed(self): wb_def = """--- version: '2.0' name: wb workflows: wf1: type: direct output-on-error: message: <% $.message %> tasks: task1: workflow: wf2 publish-on-error: message: <% task(task1).result.message %> wf2: type: direct output-on-error: message: <% $.not_existing_variable %> tasks: task1: action: std.fail publish-on-error: message: <% task(task1).result %> """ wb_service.create_workbook_v2(wb_def) # Start workflow. wf_ex = self.engine.start_workflow('wb.wf1') self.await_workflow_error(wf_ex.id) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions self.assertEqual(states.ERROR, wf_ex.state) self.assertIn('Failed to evaluate expression in output-on-error!', wf_ex.state_info) self.assertIn('$.message', wf_ex.state_info) task1 = self._assert_single_item(tasks, name='task1') self.assertIn('task(task1).result.message', task1.state_info) def test_size_of_output_by_execution_field_size_limit_kb(self): wf_text = """ version: '2.0' wf: type: direct output-on-error: custom_error: The action in the task does not exists tasks: task1: action: wrong.task """ # Note: The number 1121 below added as value for field size # limit is because the output of workflow error comes as # workflow error string + custom error message and total length # might be greater than 1121. It varies depending on the length # of the custom message. 
        # This is a random number value used for test case only.
        cfg.CONF.set_default(
            'execution_field_size_limit_kb',
            1121,
            group='engine'
        )

        kilobytes = cfg.CONF.engine.execution_field_size_limit_kb

        bytes_per_char = sys.getsizeof('s') - sys.getsizeof('')

        total_output_length = int(kilobytes * 1024 / bytes_per_char)

        wf_service.create_workflows(wf_text)

        wf_ex = self.engine.start_workflow('wf', '', None)

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            wf_output = wf_ex.output

            self.assertLess(
                len(str(wf_output.get('custom_error'))),
                total_output_length
            )

    def test_override_json_input(self):
        wf_text = """---
        version: 2.0

        wf:
          input:
            - a:
                aa: aa
                bb: bb

          tasks:
            task1:
              action: std.noop
              publish:
                published_a: <% $.a %>
        """

        wf_service.create_workflows(wf_text)

        wf_input = {
            'a': {
                'cc': 'cc',
                'dd': 'dd'
            }
        }

        # Start workflow.
        wf_ex = self.engine.start_workflow('wf', wf_input=wf_input)

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task1 = wf_ex.task_executions[0]

            self.assertDictEqual(wf_input['a'], task1.published['published_a'])

    def test_branch_publishing_success(self):
        wf_text = """---
        version: 2.0

        wf:
          tasks:
            task1:
              action: std.noop
              on-success:
                publish:
                  branch:
                    my_var: my branch value
                next: task2

            task2:
              action: std.echo output=<% $.my_var %>
        """

        wf_service.create_workflows(wf_text)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions task1 = self._assert_single_item(tasks, name='task1') self._assert_single_item(tasks, name='task2') self.assertDictEqual({"my_var": "my branch value"}, task1.published) def test_global_publishing_success_access_via_root_context_(self): wf_text = """--- version: '2.0' wf: tasks: task1: action: std.echo output="Hi" on-success: publish: global: my_var: <% task().result %> next: - task2 task2: action: std.echo output=<% $.my_var %> publish: result: <% task().result %> """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions self._assert_single_item(tasks, name='task1') task2 = self._assert_single_item(tasks, name='task2') self.assertDictEqual({'result': 'Hi'}, task2.published) def test_global_publishing_error_access_via_root_context(self): wf_text = """--- version: '2.0' wf: tasks: task1: action: std.fail on-success: publish: global: my_var: "We got success" next: - task2 on-error: publish: global: my_var: "We got an error" next: - task2 task2: action: std.echo output=<% $.my_var %> publish: result: <% task().result %> """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): # Note: We need to reread execution to access related tasks. 
wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions self._assert_single_item(tasks, name='task1') task2 = self._assert_single_item(tasks, name='task2') self.assertDictEqual({'result': 'We got an error'}, task2.published) def test_global_publishing_success_access_via_function(self): wf_text = """--- version: '2.0' wf: tasks: task1: action: std.noop on-success: publish: branch: my_var: Branch local value global: my_var: Global value next: - task2 task2: action: std.noop publish: local: <% $.my_var %> global: <% global(my_var) %> """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions self._assert_single_item(tasks, name='task1') task2 = self._assert_single_item(tasks, name='task2') self.assertDictEqual( { 'local': 'Branch local value', 'global': 'Global value' }, task2.published ) def test_global_publishing_error_access_via_function(self): wf_text = """--- version: '2.0' wf: tasks: task1: action: std.fail on-error: publish: branch: my_var: Branch local value global: my_var: Global value next: - task2 task2: action: std.noop publish: local: <% $.my_var %> global: <% global(my_var) %> """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): # Note: We need to reread execution to access related tasks. 
wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions self._assert_single_item(tasks, name='task1') task2 = self._assert_single_item(tasks, name='task2') self.assertDictEqual( { 'local': 'Branch local value', 'global': 'Global value' }, task2.published ) class DataFlowTest(test_base.BaseTest): def test_get_task_execution_result(self): task_ex = models.TaskExecution( name='task1', spec={ "version": '2.0', 'name': 'task1', 'with-items': 'var in [1]', 'type': 'direct', 'action': 'my_action' }, runtime_context={ 'with_items': {'count': 1} } ) task_ex.action_executions = [models.ActionExecution( name='my_action', output={'result': 1}, accepted=True, runtime_context={'index': 0} )] self.assertEqual([1], data_flow.get_task_execution_result(task_ex)) task_ex.action_executions.append(models.ActionExecution( name='my_action', output={'result': 1}, accepted=True, runtime_context={'index': 0} )) task_ex.action_executions.append(models.ActionExecution( name='my_action', output={'result': 1}, accepted=False, runtime_context={'index': 0} )) self.assertEqual( [1, 1], data_flow.get_task_execution_result(task_ex) ) def test_context_view(self): ctx = data_flow.ContextView( { 'k1': 'v1', 'k11': 'v11', 'k3': 'v3' }, { 'k2': 'v2', 'k21': 'v21', 'k3': 'v32' } ) self.assertIsInstance(ctx, dict) self.assertEqual(5, len(ctx)) self.assertIn('k1', ctx) self.assertIn('k11', ctx) self.assertIn('k3', ctx) self.assertIn('k2', ctx) self.assertIn('k21', ctx) self.assertEqual('v1', ctx['k1']) self.assertEqual('v1', ctx.get('k1')) self.assertEqual('v11', ctx['k11']) self.assertEqual('v11', ctx.get('k11')) self.assertEqual('v3', ctx['k3']) self.assertEqual('v2', ctx['k2']) self.assertEqual('v2', ctx.get('k2')) self.assertEqual('v21', ctx['k21']) self.assertEqual('v21', ctx.get('k21')) self.assertIsNone(ctx.get('Not existing key')) self.assertRaises(exc.MistralError, ctx.update) self.assertRaises(exc.MistralError, ctx.clear) self.assertRaises(exc.MistralError, ctx.pop, 'k1') 
self.assertRaises(exc.MistralError, ctx.popitem) self.assertRaises(exc.MistralError, ctx.__setitem__, 'k5', 'v5') self.assertRaises(exc.MistralError, ctx.__delitem__, 'k2') self.assertEqual('v1', expr.evaluate('<% $.k1 %>', ctx)) self.assertEqual('v2', expr.evaluate('<% $.k2 %>', ctx)) self.assertEqual('v3', expr.evaluate('<% $.k3 %>', ctx)) # Now change the order of dictionaries and make sure to have # a different for key 'k3'. ctx = data_flow.ContextView( { 'k2': 'v2', 'k21': 'v21', 'k3': 'v32' }, { 'k1': 'v1', 'k11': 'v11', 'k3': 'v3' } ) self.assertEqual('v32', expr.evaluate('<% $.k3 %>', ctx)) def test_context_view_eval_root_with_yaql(self): ctx = data_flow.ContextView( {'k1': 'v1'}, {'k2': 'v2'} ) res = expr.evaluate('<% $ %>', ctx) self.assertIsNotNone(res) self.assertIsInstance(res, dict) self.assertEqual(2, len(res)) def test_context_view_eval_keys(self): ctx = data_flow.ContextView( {'k1': 'v1'}, {'k2': 'v2'} ) res = expr.evaluate('<% $.keys() %>', ctx) self.assertIsNotNone(res) self.assertIsInstance(res, list) self.assertEqual(2, len(res)) self.assertIn('k1', res) self.assertIn('k2', res) def test_context_view_eval_values(self): ctx = data_flow.ContextView( {'k1': 'v1'}, {'k2': 'v2'} ) res = expr.evaluate('<% $.values() %>', ctx) self.assertIsNotNone(res) self.assertIsInstance(res, list) self.assertEqual(2, len(res)) self.assertIn('v1', res) self.assertIn('v2', res) mistral-6.0.0/mistral/tests/unit/engine/test_yaql_functions.py0000666000175100017510000002475613245513262024747 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg from mistral.db.v2 import api as db_api from mistral.services import workflows as wf_service from mistral.tests.unit.engine import base as engine_test_base from mistral.workflow import states # Use the set_default method to set value otherwise in certain test cases # the change in value is not permanent. cfg.CONF.set_default('auth_enable', False, group='pecan') class YAQLFunctionsEngineTest(engine_test_base.EngineTestCase): def test_task_function(self): wf_text = """--- version: '2.0' wf: type: direct tasks: task1: description: This is task 1 tags: ['t1'] action: std.echo output=1 publish: name: <% task(task1).name %> description: <% task(task1).spec.description %> tags: <% task(task1).spec.tags%> state: <% task(task1).state %> state_info: <% task(task1).state_info %> res: <% task(task1).result %> on-success: - task2 task2: action: std.echo output=<% task(task1).result + 1 %> publish: name: <% task(task1).name %> description: <% task(task1).spec.description %> tags: <% task(task1).spec.tags%> state: <% task(task1).state %> state_info: <% task(task1).state_info %> res: <% task(task1).result %> task2_res: <% task(task2).result %> """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): # Reread execution to access related tasks. 
wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions self.assertEqual(states.SUCCESS, wf_ex.state) task1 = self._assert_single_item( tasks, name='task1', state=states.SUCCESS ) task2 = self._assert_single_item( tasks, name='task2', state=states.SUCCESS ) self.assertDictEqual( { 'name': 'task1', 'description': 'This is task 1', 'tags': ['t1'], 'state': states.SUCCESS, 'state_info': None, 'res': 1 }, task1.published ) self.assertDictEqual( { 'name': 'task1', 'description': 'This is task 1', 'tags': ['t1'], 'state': states.SUCCESS, 'state_info': None, 'res': 1, 'task2_res': 2 }, task2.published ) def test_task_function_returns_null(self): wf_text = """--- version: '2.0' wf: output: task2: <% task(task2) %> task2bool: <% task(task2) = null %> tasks: task1: action: std.noop on-success: - task2: <% false %> task2: action: std.noop """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertDictEqual( { 'task2': None, 'task2bool': True }, wf_ex.output ) task_execs = wf_ex.task_executions self.assertEqual(1, len(task_execs)) def test_task_function_non_existing(self): wf_text = """--- version: '2.0' wf: type: direct output: task_name: <% task(non_existing_task).name %> tasks: task1: action: std.noop """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_error(wf_ex.id) wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertEqual(states.ERROR, wf_ex.state) self.assertIn('non_existing_task', wf_ex.state_info) def test_task_function_no_arguments(self): wf_text = """--- version: '2.0' wf: tasks: task1: action: std.echo output=1 publish: task1_id: <% task().id %> task1_result: <% task().result %> task1_state: <% task().state %> on-success: task2 task2: action: std.echo output=2 publish: task2_id: <% task().id %> task2_result: <% task().result %> task2_state: <% 
task().state %> """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task1_ex = self._assert_single_item( wf_ex.task_executions, name='task1' ) task2_ex = self._assert_single_item( wf_ex.task_executions, name='task2' ) self.assertDictEqual( { 'task1_id': task1_ex.id, 'task1_result': 1, 'task1_state': states.SUCCESS }, task1_ex.published ) self.assertDictEqual( { 'task2_id': task2_ex.id, 'task2_result': 2, 'task2_state': states.SUCCESS }, task2_ex.published ) def test_task_function_no_name_on_complete_case(self): wf_text = """--- version: '2.0' wf: tasks: task1: action: std.echo output=1 on-complete: - fail(msg=<% task() %>) """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertEqual(states.ERROR, wf_ex.state) self.assertIsNotNone(wf_ex.state_info) self.assertIn(wf_ex.id, wf_ex.state_info) def test_task_function_no_name_on_success_case(self): wf_text = """--- version: '2.0' wf: tasks: task1: action: std.echo output=1 on-success: - task2: <% task().result = 1 %> - task3: <% task().result = 100 %> task2: action: std.echo output=2 task3: action: std.echo output=3 """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertEqual(2, len(wf_ex.task_executions)) self._assert_single_item(wf_ex.task_executions, name='task1') self._assert_single_item(wf_ex.task_executions, name='task2') def test_uuid_function(self): wf_text = """--- version: '2.0' wf: tasks: task1: action: std.echo output=<% uuid() %> publish: result: <% task(task1).result %> """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') 
self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions task_ex = task_execs[0] result = task_ex.published['result'] self.assertIsNotNone(result) self.assertEqual(36, len(result)) self.assertEqual(4, result.count('-')) def test_execution_function(self): wf_text = """--- version: '2.0' wf: input: - k1 - k2: v2_default tasks: task1: action: std.echo output=<% execution() %> publish: result: <% task(task1).result %> """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow( 'wf', wf_input={'k1': 'v1'}, param1='blablabla' ) self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions task_ex = task_execs[0] execution = task_ex.published['result'] self.assertIsInstance(execution, dict) spec = execution['spec'] self.assertEqual('2.0', spec['version']) self.assertEqual('wf', spec['name']) self.assertIn('tasks', spec) self.assertEqual(1, len(spec['tasks'])) self.assertDictEqual( { 'k1': 'v1', 'k2': 'v2_default' }, execution['input'] ) self.assertDictEqual( {'param1': 'blablabla', 'namespace': ''}, execution['params'] ) self.assertEqual( wf_ex.created_at.isoformat(' '), execution['created_at'] ) mistral-6.0.0/mistral/tests/unit/engine/test_execution_fields_size_limitation.py0000666000175100017510000001745713245513262030525 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_config import cfg

from mistral_lib import actions as actions_base

from mistral.db.v2 import api as db_api
from mistral import exceptions as exc
from mistral.services import workflows as wf_service
from mistral.tests.unit import base as test_base
from mistral.tests.unit.engine import base
from mistral.workflow import states

# Use the set_default method to set value otherwise in certain test cases
# the change in value is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan')

WF = """
---
version: '2.0'

wf:
  input:
    - workflow_input: '__WORKFLOW_INPUT__'
    - action_output_length: 0
    - action_output_dict: false
    - action_error: false

  tasks:
    task1:
      action: my_action
      input:
        input: '__ACTION_INPUT__'
        output_length: <% $.action_output_length %>
        output_dict: <% $.action_output_dict %>
        error: <% $.action_error %>
      publish:
        p_var: '__TASK_PUBLISHED__'
"""


class MyAction(actions_base.Action):
    def __init__(self, input, output_length, output_dict=False, error=False):
        self.input = input
        self.output_length = output_length
        self.output_dict = output_dict
        self.error = error

    def run(self, context):
        if not self.output_dict:
            result = ''.join('A' for _ in range(self.output_length))
        else:
            result = {}

            for i in range(self.output_length):
                result[i] = 'A'

        if not self.error:
            return actions_base.Result(data=result)
        else:
            return actions_base.Result(error=result)

    def test(self):
        raise NotImplementedError


def generate_workflow(tokens):
    new_wf = WF

    long_string = ''.join('A' for _ in range(1024))

    for token in tokens:
        new_wf = new_wf.replace(token, long_string)

    return new_wf


class ExecutionFieldsSizeLimitTest(base.EngineTestCase):
    def setUp(self):
        """Resets the size limit config between tests"""
        super(ExecutionFieldsSizeLimitTest, self).setUp()

        cfg.CONF.set_default(
            'execution_field_size_limit_kb',
            0,
            group='engine'
        )

        test_base.register_action_class('my_action', MyAction)

    def tearDown(self):
        """Restores the size limit config to default"""
        super(ExecutionFieldsSizeLimitTest, self).tearDown()

        cfg.CONF.set_default(
            'execution_field_size_limit_kb',
            1024,
            group='engine'
        )

    def test_default_limit(self):
        cfg.CONF.set_default(
            'execution_field_size_limit_kb',
            -1,
            group='engine'
        )

        new_wf = generate_workflow(
            ['__ACTION_INPUT_', '__WORKFLOW_INPUT__', '__TASK_PUBLISHED__']
        )

        wf_service.create_workflows(new_wf)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_success(wf_ex.id)

    def test_workflow_input_default_value_limit(self):
        new_wf = generate_workflow(['__WORKFLOW_INPUT__'])

        wf_service.create_workflows(new_wf)

        # Start workflow.
        e = self.assertRaises(
            exc.SizeLimitExceededException,
            self.engine.start_workflow,
            'wf'
        )

        self.assertEqual(
            "Size of 'input' is 1KB which exceeds the limit of 0KB",
            str(e)
        )

    def test_workflow_input_limit(self):
        wf_service.create_workflows(WF)

        # Start workflow.
        e = self.assertRaises(
            exc.SizeLimitExceededException,
            self.engine.start_workflow,
            'wf',
            wf_input={'workflow_input': ''.join('A' for _ in range(1024))}
        )

        self.assertEqual(
            "Size of 'input' is 1KB which exceeds the limit of 0KB",
            str(e)
        )

    def test_action_input_limit(self):
        new_wf = generate_workflow(['__ACTION_INPUT__'])

        wf_service.create_workflows(new_wf)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wf')

        self.assertEqual(states.ERROR, wf_ex.state)
        self.assertIn(
            "Size of 'input' is 1KB which exceeds the limit of 0KB",
            wf_ex.state_info
        )

    def test_action_output_limit(self):
        wf_service.create_workflows(WF)

        # Start workflow.
        wf_ex = self.engine.start_workflow(
            'wf',
            wf_input={'action_output_length': 1024}
        )

        self.await_workflow_error(wf_ex.id)

        # Note: We need to reread execution to access related tasks.
        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertIn(
            "Size of 'output' is 1KB which exceeds the limit of 0KB",
            wf_ex.state_info
        )
        self.assertEqual(states.ERROR, wf_ex.state)

    def test_task_published_limit(self):
        new_wf = generate_workflow(['__TASK_PUBLISHED__'])

        wf_service.create_workflows(new_wf)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertIn(
                'Failed to handle action completion [error=Size of',
                wf_ex.state_info
            )
            self.assertIn('wf=wf, task=task1', wf_ex.state_info)

            task_ex = self._assert_single_item(task_execs, name='task1')

            self.assertIn(
                "Size of 'published' is 1KB which exceeds the limit of 0KB",
                task_ex.state_info
            )

    def test_workflow_params_limit(self):
        wf_service.create_workflows(WF)

        # Start workflow.
        long_string = ''.join('A' for _ in range(1024))

        e = self.assertRaises(
            exc.SizeLimitExceededException,
            self.engine.start_workflow,
            'wf',
            env={'param': long_string}
        )

        self.assertIn(
            "Size of 'params' is 1KB which exceeds the limit of 0KB",
            str(e)
        )

    def test_task_execution_state_info_trimmed(self):
        # No limit on output, input and other JSON fields.
        cfg.CONF.set_default(
            'execution_field_size_limit_kb',
            -1,
            group='engine'
        )

        wf_service.create_workflows(WF)

        # Start workflow.
        wf_ex = self.engine.start_workflow(
            'wf',
            wf_input={
                'action_output_length': 80000,
                'action_output_dict': True,
                'action_error': True
            }
        )

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_ex = self._assert_single_item(
                wf_ex.task_executions,
                state=states.ERROR
            )

            # "state_info" must be trimmed so that it's not greater than 65535.
            self.assertLess(len(task_ex.state_info), 65536)
            self.assertGreater(len(task_ex.state_info), 65490)
            self.assertLess(len(wf_ex.state_info), 65536)
            self.assertGreater(len(wf_ex.state_info), 65490)
mistral-6.0.0/mistral/tests/unit/engine/test_task_defaults.py0000666000175100017510000001505713245513262024530 0ustar  zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import datetime as dt

import mock
from oslo_config import cfg
import requests

from mistral.db.v2 import api as db_api
from mistral.services import workflows as wf_service
from mistral.tests.unit.engine import base
from mistral.workflow import states

# Use the set_default method to set value otherwise in certain test cases
# the change in value is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan')


class TaskDefaultsDirectWorkflowEngineTest(base.EngineTestCase):
    @mock.patch.object(
        requests,
        'request',
        mock.MagicMock(side_effect=Exception())
    )
    def test_task_defaults_on_error(self):
        wf_text = """---
        version: '2.0'

        wf:
          type: direct

          task-defaults:
            on-error:
              - task3

          tasks:
            task1:
              description: That should lead to transition to task3.
              action: std.http url="http://some_url"
              on-success:
                - task2

            task2:
              action: std.echo output="Morpheus"

            task3:
              action: std.echo output="output"
        """

        wf_service.create_workflows(wf_text)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            tasks = wf_ex.task_executions

            task1 = self._assert_single_item(tasks, name='task1')
            task3 = self._assert_single_item(tasks, name='task3')

            self.assertEqual(2, len(tasks))
            self.assertEqual(states.ERROR, task1.state)
            self.assertEqual(states.SUCCESS, task3.state)


class TaskDefaultsReverseWorkflowEngineTest(base.EngineTestCase):
    def test_task_defaults_retry_policy(self):
        wf_text = """---
        version: '2.0'

        wf:
          type: reverse

          task-defaults:
            retry:
              count: 2
              delay: 1

          tasks:
            task1:
              action: std.fail

            task2:
              action: std.echo output=2
              requires: [task1]
        """

        wf_service.create_workflows(wf_text)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wf', task_name='task2')

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            tasks = wf_ex.task_executions

            self.assertEqual(1, len(tasks))

            task1 = self._assert_single_item(
                tasks,
                name='task1',
                state=states.ERROR
            )

            self.assertGreater(
                task1.runtime_context['retry_task_policy']['retry_no'],
                0
            )

    def test_task_defaults_timeout_policy(self):
        wf_text = """---
        version: '2.0'

        wf:
          type: reverse

          task-defaults:
            timeout: 1

          tasks:
            task1:
              action: std.async_noop

            task2:
              action: std.echo output=2
              requires: [task1]
        """

        wf_service.create_workflows(wf_text)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wf', task_name='task2')

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            tasks = wf_ex.task_executions

            self.assertEqual(1, len(tasks))

            self._assert_single_item(tasks, name='task1', state=states.ERROR)

            task_ex = db_api.get_task_execution(tasks[0].id)

            self.assertIn("Task timed out", task_ex.state_info)

    def test_task_defaults_wait_policies(self):
        wf_text = """---
        version: '2.0'

        wf:
          type: reverse

          task-defaults:
            wait-before: 1
            wait-after: 1

          tasks:
            task1:
              action: std.echo output=1
        """

        wf_service.create_workflows(wf_text)

        time_before = dt.datetime.now()

        # Start workflow.
        wf_ex = self.engine.start_workflow('wf', task_name='task1')

        self.await_workflow_success(wf_ex.id)

        # Workflow must work at least 2 seconds (1+1).
        self.assertGreater(
            (dt.datetime.now() - time_before).total_seconds(),
            2
        )

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            tasks = wf_ex.task_executions

            self.assertEqual(1, len(tasks))

            self._assert_single_item(tasks, name='task1', state=states.SUCCESS)

    def test_task_defaults_requires(self):
        wf_text = """---
        version: '2.0'

        wf:
          type: reverse

          task-defaults:
            requires: [always_do]

          tasks:
            task1:
              action: std.echo output=1

            task2:
              action: std.echo output=1
              requires: [task1]

            always_do:
              action: std.echo output="Do something"
        """

        wf_service.create_workflows(wf_text)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wf', task_name='task2')

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            tasks = wf_ex.task_executions

            self.assertEqual(3, len(tasks))

            self._assert_single_item(tasks, name='task1', state=states.SUCCESS)
            self._assert_single_item(tasks, name='task2', state=states.SUCCESS)
            self._assert_single_item(tasks, name='always_do', state=states.SUCCESS)
mistral-6.0.0/mistral/tests/unit/engine/test_safe_rerun.py0000666000175100017510000001240613245513262024027 0ustar  zuulzuul00000000000000# Copyright (c) 2016 Intel Corporation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from mistral.db.v2 import api as db_api
from mistral.executors import default_executor as d_exe
from mistral.executors import remote_executor as r_exe
from mistral.services import workflows as wf_service
from mistral.tests.unit.engine import base
from mistral.workflow import data_flow
from mistral.workflow import states


def _run_at_target(action_ex_id, action_class_str, attributes,
                   action_params, safe_rerun, execution_context,
                   target=None, async_=True, timeout=None):
    # We'll just call executor directly for testing purposes.
    executor = d_exe.DefaultExecutor()

    executor.run_action(
        action_ex_id,
        action_class_str,
        attributes,
        action_params,
        safe_rerun,
        execution_context,
        redelivered=True
    )


MOCK_RUN_AT_TARGET = mock.MagicMock(side_effect=_run_at_target)


class TestSafeRerun(base.EngineTestCase):
    @mock.patch.object(r_exe.RemoteExecutor, 'run_action', MOCK_RUN_AT_TARGET)
    def test_safe_rerun_true(self):
        wf_text = """---
        version: '2.0'

        wf:
          tasks:
            task1:
              action: std.noop
              safe-rerun: true
              on-success:
                - task2
              on-error:
                - task3

            task2:
              action: std.noop
              safe-rerun: true

            task3:
              action: std.noop
              safe-rerun: true
        """
        # Note: because every task has the redelivered flag set to true in the
        # mock function (_run_at_target), task2 and task3 have to set
        # safe-rerun to true.

        wf_service.create_workflows(wf_text)

        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            tasks = wf_ex.task_executions

            self.assertEqual(len(tasks), 2)

            task1 = self._assert_single_item(tasks, name='task1')
            task2 = self._assert_single_item(tasks, name='task2')

            self.assertEqual(task1.state, states.SUCCESS)
            self.assertEqual(task2.state, states.SUCCESS)

    @mock.patch.object(r_exe.RemoteExecutor, 'run_action', MOCK_RUN_AT_TARGET)
    def test_safe_rerun_false(self):
        wf_text = """---
        version: '2.0'

        wf:
          tasks:
            task1:
              action: std.noop
              safe-rerun: false
              on-success:
                - task2
              on-error:
                - task3

            task2:
              action: std.noop
              safe-rerun: true

            task3:
              action: std.noop
              safe-rerun: true
        """
        # Note: because every task has the redelivered flag set to true in the
        # mock function (_run_at_target), task2 and task3 have to set
        # safe-rerun to true.

        wf_service.create_workflows(wf_text)

        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            tasks = wf_ex.task_executions

            self.assertEqual(len(tasks), 2)

            task1 = self._assert_single_item(tasks, name='task1')
            task3 = self._assert_single_item(tasks, name='task3')

            self.assertEqual(task1.state, states.ERROR)
            self.assertEqual(task3.state, states.SUCCESS)

    @mock.patch.object(r_exe.RemoteExecutor, 'run_action', MOCK_RUN_AT_TARGET)
    def test_safe_rerun_with_items(self):
        wf_text = """---
        version: '2.0'

        wf:
          tasks:
            task1:
              with-items: i in [1, 2, 3]
              action: std.echo output=<% $.i %>
              safe-rerun: true
              publish:
                result: <% task(task1).result %>
        """

        wf_service.create_workflows(wf_text)

        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            tasks = wf_ex.task_executions

            self.assertEqual(len(tasks), 1)

            task1 = self._assert_single_item(tasks, name='task1')

            self.assertEqual(task1.state, states.SUCCESS)

            result = data_flow.get_task_execution_result(task1)

            self.assertIn(1, result)
            self.assertIn(2, result)
            self.assertIn(3, result)
mistral-6.0.0/mistral/tests/unit/engine/test_commands.py0000666000175100017510000003154613245513261023502 0ustar  zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg

from mistral.db.v2 import api as db_api
from mistral.services import workbooks as wb_service
from mistral.tests.unit.engine import base
from mistral.workflow import states

# Use the set_default method to set value otherwise in certain test cases
# the change in value is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan')

WORKBOOK1 = """
---
version: '2.0'

name: my_wb

workflows:
  wf:
    type: direct
    input:
      - my_var

    tasks:
      task1:
        action: std.echo output='1'
        on-complete:
          - fail: <% $.my_var = 1 %>
          - succeed: <% $.my_var = 2 %>
          - pause: <% $.my_var = 3 %>
          - task2

      task2:
        action: std.echo output='2'
"""


class SimpleEngineCommandsTest(base.EngineTestCase):
    def setUp(self):
        super(SimpleEngineCommandsTest, self).setUp()

        wb_service.create_workbook_v2(WORKBOOK1)

    def test_fail(self):
        wf_ex = self.engine.start_workflow('my_wb.wf', wf_input={'my_var': 1})

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )

    def test_succeed(self):
        wf_ex = self.engine.start_workflow('my_wb.wf', wf_input={'my_var': 2})

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )

    def test_pause(self):
        wf_ex = self.engine.start_workflow('my_wb.wf', wf_input={'my_var': 3})

        self.await_workflow_paused(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )


WORKBOOK2 = """
---
version: '2.0'

name: my_wb

workflows:
  wf:
    type: direct
    input:
      - my_var

    task-defaults:
      on-complete:
        - fail: <% $.my_var = 1 %>
        - succeed: <% $.my_var = 2 %>
        - pause: <% $.my_var = 3 %>
        - task2: <% $.my_var = 4 %>  # (Never happens in this test)

    tasks:
      task1:
        action: std.echo output='1'

      task2:
        action: std.echo output='2'
"""


class SimpleEngineWorkflowLevelCommandsTest(base.EngineTestCase):
    def setUp(self):
        super(SimpleEngineWorkflowLevelCommandsTest, self).setUp()

        wb_service.create_workbook_v2(WORKBOOK2)

    def test_fail(self):
        wf_ex = self.engine.start_workflow('my_wb.wf', wf_input={'my_var': 1})

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )

    def test_succeed(self):
        wf_ex = self.engine.start_workflow('my_wb.wf', wf_input={'my_var': 2})

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )

    def test_pause(self):
        wf_ex = self.engine.start_workflow('my_wb.wf', wf_input={'my_var': 3})

        self.await_workflow_paused(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )


WORKBOOK3 = """
---
version: '2.0'

name: my_wb

workflows:
  fail_first_wf:
    type: direct

    tasks:
      task1:
        action: std.echo output='1'
        on-complete:
          - fail
          - task2

      task2:
        action: std.echo output='2'

  fail_second_wf:
    type: direct

    tasks:
      task1:
        action: std.echo output='1'
        on-complete:
          - task2
          - fail

      task2:
        action: std.echo output='2'

  succeed_first_wf:
    type: direct

    tasks:
      task1:
        action: std.echo output='1'
        on-complete:
          - succeed
          - task2

      task2:
        action: std.echo output='2'

  succeed_second_wf:
    type: direct

    tasks:
      task1:
        action: std.echo output='1'
        on-complete:
          - task2
          - succeed

      task2:
        action: std.http url='some.not.existing.url'
"""


class OrderEngineCommandsTest(base.EngineTestCase):
    def setUp(self):
        super(OrderEngineCommandsTest, self).setUp()

        wb_service.create_workbook_v2(WORKBOOK3)

    def test_fail_first(self):
        wf_ex = self.engine.start_workflow('my_wb.fail_first_wf')

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )

    def test_fail_second(self):
        wf_ex = self.engine.start_workflow('my_wb.fail_second_wf')

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(2, len(task_execs))

            self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )

            task2_db = self._assert_single_item(task_execs, name='task2')

        self.await_task_success(task2_db.id)
        self.await_workflow_error(wf_ex.id)

    def test_succeed_first(self):
        wf_ex = self.engine.start_workflow('my_wb.succeed_first_wf')

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )

    def test_succeed_second(self):
        wf_ex = self.engine.start_workflow('my_wb.succeed_second_wf')

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(2, len(task_execs))

            self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )

            task2_db = self._assert_single_item(task_execs, name='task2')

        self.await_task_error(task2_db.id)
        self.await_workflow_success(wf_ex.id)


WORKBOOK4 = """
---
version: '2.0'

name: my_wb

workflows:
  wf:
    type: direct
    input:
      - my_var
    tasks:
      task1:
        action: std.echo output='1'
        on-complete:
          - fail(msg='my_var value is 1'): <% $.my_var = 1 %>
          - succeed(msg='my_var value is 2'): <% $.my_var = 2 %>
          - pause(msg='my_var value is 3'): <% $.my_var = 3 %>
          - task2

      task2:
        action: std.echo output='2'
"""


class SimpleEngineCmdsWithMsgTest(base.EngineTestCase):
    def setUp(self):
        super(SimpleEngineCmdsWithMsgTest, self).setUp()

        wb_service.create_workbook_v2(WORKBOOK4)

    def test_fail(self):
        wf_ex = self.engine.start_workflow('my_wb.wf', wf_input={'my_var': 1})

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )

            self.assertEqual(states.ERROR, wf_ex.state)
            self.assertEqual('my_var value is 1', wf_ex.state_info)

    def test_succeed(self):
        wf_ex = self.engine.start_workflow('my_wb.wf', wf_input={'my_var': 2})

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )

            self.assertEqual(states.SUCCESS, wf_ex.state)
            self.assertEqual("my_var value is 2", wf_ex.state_info)

    def test_pause(self):
        wf_ex = self.engine.start_workflow('my_wb.wf', wf_input={'my_var': 3})

        self.await_workflow_paused(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )

            self.assertEqual(states.PAUSED, wf_ex.state)
            self.assertEqual("my_var value is 3", wf_ex.state_info)


WORKBOOK5 = """
---
version: '2.0'

name: my_wb

workflows:
  wf:
    type: direct
    input:
      - my_var

    task-defaults:
      on-complete:
        - fail(msg='my_var value is 1'): <% $.my_var = 1 %>
        - succeed(msg='my_var value is <% $.my_var %>'): <% $.my_var = 2 %>
        - pause(msg='my_var value is 3'): <% $.my_var = 3 %>
        - task2: <% $.my_var = 4 %>  # (Never happens in this test)

    tasks:
      task1:
        action: std.echo output='1'

      task2:
        action: std.echo output='2'
"""


class SimpleEngineWorkflowLevelCmdsWithMsgTest(base.EngineTestCase):
    def setUp(self):
        super(SimpleEngineWorkflowLevelCmdsWithMsgTest, self).setUp()

        wb_service.create_workbook_v2(WORKBOOK5)

    def test_fail(self):
        wf_ex = self.engine.start_workflow('my_wb.wf', wf_input={'my_var': 1})

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )

            self.assertEqual(states.ERROR, wf_ex.state)
            self.assertEqual("my_var value is 1", wf_ex.state_info)

    def test_succeed(self):
        wf_ex = self.engine.start_workflow('my_wb.wf', wf_input={'my_var': 2})

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )

            self.assertEqual(states.SUCCESS, wf_ex.state)
            self.assertEqual("my_var value is 2", wf_ex.state_info)

    def test_pause(self):
        wf_ex = self.engine.start_workflow('my_wb.wf', wf_input={'my_var': 3})

        self.await_workflow_paused(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )

            self.assertEqual(states.PAUSED, wf_ex.state)
            self.assertEqual("my_var value is 3", wf_ex.state_info)
mistral-6.0.0/mistral/tests/unit/engine/test_workflow_stop.py0000666000175100017510000000367413245513262024620 0ustar  zuulzuul00000000000000# Copyright 2015 - StackStorm, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from mistral.db.v2 import api as db_api
from mistral.services import workflows as wf_service
from mistral.tests.unit.engine import base
from mistral.workflow import states


class WorkflowStopTest(base.EngineTestCase):
    def setUp(self):
        super(WorkflowStopTest, self).setUp()

        WORKFLOW = """
        version: '2.0'

        wf:
          type: direct
          tasks:
            task1:
              action: std.echo output="Echo"
              on-complete:
                - task2

            task2:
              action: std.echo output="foo"
              wait-before: 3
        """
        wf_service.create_workflows(WORKFLOW)

        self.exec_id = self.engine.start_workflow('wf').id

    def test_stop_failed(self):
        self.engine.stop_workflow(self.exec_id, states.SUCCESS, "Force stop")

        self.await_workflow_success(self.exec_id)

        wf_ex = db_api.get_workflow_execution(self.exec_id)

        self.assertEqual(states.SUCCESS, wf_ex.state)
        self.assertEqual("Force stop", wf_ex.state_info)

    def test_stop_succeeded(self):
        self.engine.stop_workflow(self.exec_id, states.ERROR, "Failure")

        self.await_workflow_error(self.exec_id)

        wf_ex = db_api.get_workflow_execution(self.exec_id)

        self.assertEqual(states.ERROR, wf_ex.state)
        self.assertEqual("Failure", wf_ex.state_info)
mistral-6.0.0/mistral/tests/unit/engine/test_with_items_task.py0000666000175100017510000000417413245513262025077 0ustar  zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from mistral.db.v2.sqlalchemy import models
from mistral.engine import tasks
from mistral.tests.unit import base
from mistral.workflow import states


# TODO(rakhmerov): This test is a legacy of the previous 'with-items'
# implementation when most of its logic was in the with_items.py module.
# It makes sense to add more tests for various methods of WithItemsTask.
class WithItemsTaskTest(base.BaseTest):
    @staticmethod
    def get_action_ex(accepted, state, index):
        return models.ActionExecution(
            accepted=accepted,
            state=state,
            runtime_context={'index': index}
        )

    def test_get_next_indices(self):
        # Task execution for running 6 items with concurrency=3.
        task_ex = models.TaskExecution(
            spec={
                'action': 'myaction'
            },
            runtime_context={
                'with_items': {
                    'capacity': 3,
                    'count': 6
                }
            },
            action_executions=[],
            workflow_executions=[]
        )

        task = tasks.WithItemsTask(None, None, None, {}, task_ex)

        # Set 3 items: 2 success and 1 error unaccepted.
        task_ex.action_executions += [
            self.get_action_ex(True, states.SUCCESS, 0),
            self.get_action_ex(True, states.SUCCESS, 1),
            self.get_action_ex(False, states.ERROR, 2)
        ]

        # Then call get_indices and expect [2, 3, 4].
        indexes = task._get_next_indexes()

        self.assertListEqual([2, 3, 4], indexes)
mistral-6.0.0/mistral/tests/unit/engine/test_cron_trigger.py0000666000175100017510000001360313245513261024361 0ustar  zuulzuul00000000000000# Copyright 2015 Alcatel-Lucent, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import datetime

import mock
from oslo_config import cfg

from mistral import context as auth_ctx
from mistral.db.v2 import api as db_api
from mistral import exceptions as exc
from mistral.services import periodic
from mistral.services import security
from mistral.services import triggers
from mistral.services import workflows
from mistral.tests.unit.engine import base
from mistral import utils

WORKFLOW_LIST = """
---
version: '2.0'

my_wf:
  type: direct
  tasks:
    task1:
      action: std.echo output='Hi!'
"""


class ProcessCronTriggerTest(base.EngineTestCase):
    @mock.patch.object(security, 'create_trust',
                       type('trust', (object,), {'id': 'my_trust_id'}))
    @mock.patch('mistral.rpc.clients.get_engine_client')
    def test_start_workflow(self, get_engine_client_mock):
        cfg.CONF.set_default('auth_enable', True, group='pecan')

        wf = workflows.create_workflows(WORKFLOW_LIST)[0]

        t = triggers.create_cron_trigger(
            'trigger-%s' % utils.generate_unicode_uuid(),
            wf.name,
            {},
            {},
            '* * * * * */1',
            None,
            None,
            None
        )

        self.assertEqual('my_trust_id', t.trust_id)

        cfg.CONF.set_default('auth_enable', False, group='pecan')

        next_trigger = triggers.get_next_cron_triggers()[0]
        next_execution_time_before = next_trigger.next_execution_time

        periodic.process_cron_triggers_v2(None, None)

        start_wf_mock = get_engine_client_mock.return_value.start_workflow

        start_wf_mock.assert_called_once()

        # Check actual parameters of the call.
        self.assertEqual(
            ('my_wf', '', None, {}),
            start_wf_mock.mock_calls[0][1]
        )
        self.assertIn(
            t.id,
            start_wf_mock.mock_calls[0][2]['description']
        )

        next_triggers = triggers.get_next_cron_triggers()

        self.assertEqual(1, len(next_triggers))

        next_trigger = next_triggers[0]
        next_execution_time_after = next_trigger.next_execution_time

        # Checking the workflow was executed, by
        # verifying that the next execution time changed.
        self.assertNotEqual(
            next_execution_time_before,
            next_execution_time_after
        )

    def test_workflow_without_auth(self):
        cfg.CONF.set_default('auth_enable', False, group='pecan')

        wf = workflows.create_workflows(WORKFLOW_LIST)[0]

        triggers.create_cron_trigger(
            'trigger-%s' % utils.generate_unicode_uuid(),
            wf.name,
            {},
            {},
            '* * * * * */1',
            None,
            None,
            None
        )

        next_triggers = triggers.get_next_cron_triggers()

        self.assertEqual(1, len(next_triggers))

        next_trigger = next_triggers[0]
        next_execution_time_before = next_trigger.next_execution_time

        periodic.process_cron_triggers_v2(None, None)

        next_triggers = triggers.get_next_cron_triggers()

        self.assertEqual(1, len(next_triggers))

        next_trigger = next_triggers[0]
        next_execution_time_after = next_trigger.next_execution_time

        self.assertNotEqual(
            next_execution_time_before,
            next_execution_time_after
        )

    @mock.patch('mistral.services.triggers.validate_cron_trigger_input')
    def test_create_cron_trigger_with_pattern_and_first_time(self,
                                                             validate_mock):
        cfg.CONF.set_default('auth_enable', False, group='pecan')

        wf = workflows.create_workflows(WORKFLOW_LIST)[0]

        # Make the first_time 1 sec later than current time, in order to make
        # it executed by next cron-trigger task.
        first_time = datetime.datetime.utcnow() + datetime.timedelta(0, 1)

        # Creates a cron-trigger with pattern and first time, ensure the
        # cron-trigger can be executed more than once, and cron-trigger will
        # not be deleted.
trigger_name = 'trigger-%s' % utils.generate_unicode_uuid() cron_trigger = triggers.create_cron_trigger( trigger_name, wf.name, {}, {}, '*/1 * * * *', first_time, None, None ) self.assertEqual( first_time, cron_trigger.next_execution_time ) periodic.process_cron_triggers_v2(None, None) # After process_triggers context is set to None, need to reset it. auth_ctx.set_ctx(self.ctx) next_time = triggers.get_next_execution_time( cron_trigger.pattern, cron_trigger.next_execution_time ) cron_trigger_db = db_api.get_cron_trigger(trigger_name) self.assertIsNotNone(cron_trigger_db) self.assertEqual( next_time, cron_trigger_db.next_execution_time ) def test_validate_cron_trigger_input_first_time(self): cfg.CONF.set_default('auth_enable', False, group='pecan') first_time = datetime.datetime.utcnow() + datetime.timedelta(0, 1) self.assertRaises( exc.InvalidModelException, triggers.validate_cron_trigger_input, None, first_time, None ) mistral-6.0.0/mistral/tests/unit/engine/test_adhoc_actions.py0000666000175100017510000001664313245513261024502 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg from mistral.db.v2 import api as db_api from mistral.services import workbooks as wb_service from mistral.tests.unit.engine import base from mistral.workflow import states # Use the set_default method to set value otherwise in certain test cases # the change in value is not permanent. 
cfg.CONF.set_default('auth_enable', False, group='pecan') WORKBOOK = """ --- version: '2.0' name: my_wb actions: concat_twice: base: std.echo base-input: output: "<% $.s1 %>+<% $.s2 %>" input: - s1: "a" - s2 output: "<% $ %> and <% $ %>" test_env: base: std.echo base-input: output: '{{ env().foo }}' nested_concat: base: my_wb.concat_twice base-input: s2: '{{ _.n2 }}' input: - n2: 'b' output: nested_concat: '{{ _ }}' missing_base: base: wrong input: - some_input nested_missing_base: base: missing_base input: - some_input workflows: wf1: type: direct input: - str1 - str2 output: workflow_result: <% $.result %> # Access to execution context variables concat_task_result: <% task(concat).result %> # Same but via task name tasks: concat: action: concat_twice s1=<% $.str1 %> s2=<% $.str2 %> publish: result: <% task(concat).result %> wf2: type: direct input: - str1 - str2 output: workflow_result: <% $.result %> # Access to execution context variables concat_task_result: <% task(concat).result %> # Same but via task name tasks: concat: action: concat_twice s2=<% $.str2 %> publish: result: <% task(concat).result %> wf3: type: direct input: - str1 - str2 tasks: concat: action: concat_twice wf4: type: direct input: - str1 output: workflow_result: '{{ _.printenv_result }}' tasks: printenv: action: test_env publish: printenv_result: '{{ task().result }}' wf5: type: direct output: workflow_result: '{{ _.nested_result }}' tasks: nested_test: action: nested_concat publish: nested_result: '{{ task().result }}' wf6: type: direct output: workflow_result: '{{ _.missing_result }}' tasks: missing_action: action: missing_base on-complete: - next_action next_action: publish: missing_result: 'Finished' wf7: type: direct output: workflow_result: '{{ _.missing_result }}' tasks: nested_missing_action: action: nested_missing_base on-complete: - next_action next_action: publish: missing_result: 'Finished' """ class AdhocActionsTest(base.EngineTestCase): def setUp(self): super(AdhocActionsTest, 
self).setUp() wb_service.create_workbook_v2(WORKBOOK) def test_run_workflow_with_adhoc_action(self): wf_ex = self.engine.start_workflow( 'my_wb.wf1', wf_input={'str1': 'a', 'str2': 'b'} ) self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertDictEqual( { 'workflow_result': 'a+b and a+b', 'concat_task_result': 'a+b and a+b' }, wf_ex.output ) def test_run_adhoc_action_without_input_value(self): wf_ex = self.engine.start_workflow( 'my_wb.wf2', wf_input={'str1': 'a', 'str2': 'b'} ) self.await_workflow_success(wf_ex.id) expected_output = { 'workflow_result': 'a+b and a+b', 'concat_task_result': 'a+b and a+b' } with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertDictEqual(expected_output, wf_ex.output) def test_run_adhoc_action_without_sufficient_input_value(self): wf_ex = self.engine.start_workflow( 'my_wb.wf3', wf_input={'str1': 'a', 'str2': 'b'} ) self.assertIn("Invalid input", wf_ex.state_info) self.assertEqual(states.ERROR, wf_ex.state) def test_run_adhoc_action_with_env(self): wf_ex = self.engine.start_workflow( 'my_wb.wf4', wf_input={'str1': 'a'}, env={'foo': 'bar'} ) self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertDictEqual( { 'workflow_result': 'bar' }, wf_ex.output ) def test_run_nested_adhoc_with_output(self): wf_ex = self.engine.start_workflow('my_wb.wf5') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertDictEqual( { 'workflow_result': {'nested_concat': 'a+b and a+b'} }, wf_ex.output ) def test_missing_adhoc_action_definition(self): wf_ex = self.engine.start_workflow('my_wb.wf6') self.await_workflow_error(wf_ex.id) with db_api.transaction(): # Note: We need to reread execution to access related tasks. 
wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions task1 = self._assert_single_item(tasks, name='missing_action') self.assertEqual(states.ERROR, task1.state) def test_nested_missing_adhoc_action_definition(self): wf_ex = self.engine.start_workflow('my_wb.wf7') self.await_workflow_error(wf_ex.id) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions task1 = self._assert_single_item( tasks, name='nested_missing_action' ) self.assertEqual(states.ERROR, task1.state) def test_adhoc_async_action(self): wb_text = """--- version: '2.0' name: my_wb1 actions: my_action: input: - my_param base: std.mistral_http base-input: url: http://google.com/<% $.my_param %> method: GET workflows: my_wf: tasks: task1: action: my_action my_param="asdfasdf" """ wb_service.create_workbook_v2(wb_text) wf_ex = self.engine.start_workflow('my_wb1.my_wf') self.await_workflow_running(wf_ex.id) mistral-6.0.0/mistral/tests/unit/engine/test_policies.py0000666000175100017510000013117713245513272023515 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from eventlet import timeout import mock from oslo_config import cfg import requests from mistral.actions import std_actions from mistral.db.v2 import api as db_api from mistral.db.v2.sqlalchemy import models from mistral.engine import policies from mistral import exceptions as exc from mistral.lang import parser as spec_parser from mistral.services import workbooks as wb_service from mistral.services import workflows as wf_service from mistral.tests.unit.engine import base from mistral.workflow import states from mistral_lib.actions import types # Use the set_default method to set value otherwise in certain test cases # the change in value is not permanent. cfg.CONF.set_default('auth_enable', False, group='pecan') WORKBOOK = """ --- version: '2.0' name: wb workflows: wf1: type: direct tasks: task1: action: std.echo output="Hi!" wait-before: 2 wait-after: 5 timeout: 7 retry: count: 5 delay: 10 break-on: <% $.my_val = 10 %> """ WB_WITH_DEFAULTS = """ --- version: '2.0' name: wb workflows: wf1: type: direct task-defaults: wait-before: 2 retry: count: 2 delay: 1 tasks: task1: action: std.echo output="Hi!" wait-before: 3 wait-after: 5 timeout: 7 """ WAIT_BEFORE_WB = """ --- version: '2.0' name: wb workflows: wf1: type: direct tasks: task1: action: std.echo output="Hi!" wait-before: %d """ WAIT_BEFORE_FROM_VAR = """ --- version: '2.0' name: wb workflows: wf1: type: direct input: - wait_before tasks: task1: action: std.echo output="Hi!" wait-before: <% $.wait_before %> """ WAIT_AFTER_WB = """ --- version: '2.0' name: wb workflows: wf1: type: direct tasks: task1: action: std.echo output="Hi!" wait-after: %d """ WAIT_AFTER_FROM_VAR = """ --- version: '2.0' name: wb workflows: wf1: type: direct input: - wait_after tasks: task1: action: std.echo output="Hi!" 
wait-after: <% $.wait_after %> """ RETRY_WB = """ --- version: '2.0' name: wb workflows: wf1: type: direct tasks: task1: action: std.http url="http://some_non-existing_host" retry: count: %(count)d delay: %(delay)d """ RETRY_WB_FROM_VAR = """ --- version: '2.0' name: wb workflows: wf1: type: direct input: - count - delay tasks: task1: action: std.http url="http://some_non-existing_host" retry: count: <% $.count %> delay: <% $.delay %> """ TIMEOUT_WB = """ --- version: '2.0' name: wb workflows: wf1: type: direct tasks: task1: action: std.async_noop timeout: %d on-error: - task2 task2: action: std.echo output="Hi!" timeout: 3 """ TIMEOUT_WB2 = """ --- version: '2.0' name: wb workflows: wf1: type: direct tasks: task1: action: std.async_noop timeout: 1 """ TIMEOUT_FROM_VAR = """ --- version: '2.0' name: wb workflows: wf1: type: direct input: - timeout tasks: task1: action: std.async_noop timeout: <% $.timeout %> """ PAUSE_BEFORE_WB = """ --- version: '2.0' name: wb workflows: wf1: type: direct tasks: task1: action: std.echo output="Hi!" pause-before: True on-success: - task2 task2: action: std.echo output="Bye!" """ PAUSE_BEFORE_DELAY_WB = """ --- version: '2.0' name: wb workflows: wf1: type: direct tasks: task1: action: std.echo output="Hi!" wait-before: 1 pause-before: true on-success: - task2 task2: action: std.echo output="Bye!" """ CONCURRENCY_WB = """ --- version: '2.0' name: wb workflows: wf1: type: direct tasks: task1: action: std.echo output="Hi!" concurrency: %d """ CONCURRENCY_WB_FROM_VAR = """ --- version: '2.0' name: wb workflows: wf1: type: direct input: - concurrency tasks: task1: action: std.echo output="Hi!" 
concurrency: <% $.concurrency %> """ class PoliciesTest(base.EngineTestCase): def setUp(self): super(PoliciesTest, self).setUp() self.wb_spec = spec_parser.get_workbook_spec_from_yaml(WORKBOOK) self.wf_spec = self.wb_spec.get_workflows()['wf1'] self.task_spec = self.wf_spec.get_tasks()['task1'] def test_build_policies(self): arr = policies.build_policies( self.task_spec.get_policies(), self.wf_spec ) self.assertEqual(4, len(arr)) p = self._assert_single_item(arr, delay=2) self.assertIsInstance(p, policies.WaitBeforePolicy) p = self._assert_single_item(arr, delay=5) self.assertIsInstance(p, policies.WaitAfterPolicy) p = self._assert_single_item(arr, delay=10) self.assertIsInstance(p, policies.RetryPolicy) self.assertEqual(5, p.count) self.assertEqual('<% $.my_val = 10 %>', p._break_on_clause) p = self._assert_single_item(arr, delay=7) self.assertIsInstance(p, policies.TimeoutPolicy) def test_task_policy_class(self): policy = policies.base.TaskPolicy() policy._schema = { "properties": { "delay": {"type": "integer"} } } wf_ex = models.WorkflowExecution( id='1-2-3-4', context={}, input={} ) task_ex = models.TaskExecution(in_context={'int_var': 5}) task_ex.workflow_execution = wf_ex policy.delay = "<% $.int_var %>" # Validation is ok. policy.before_task_start(task_ex, None) policy.delay = "some_string" # Validation is failing now. 
exception = self.assertRaises( exc.InvalidModelException, policy.before_task_start, task_ex, None ) self.assertIn("Invalid data type in TaskPolicy", str(exception)) def test_build_policies_with_workflow_defaults(self): wb_spec = spec_parser.get_workbook_spec_from_yaml(WB_WITH_DEFAULTS) wf_spec = wb_spec.get_workflows()['wf1'] task_spec = wf_spec.get_tasks()['task1'] arr = policies.build_policies(task_spec.get_policies(), wf_spec) self.assertEqual(4, len(arr)) p = self._assert_single_item(arr, delay=3) self.assertIsInstance(p, policies.WaitBeforePolicy) p = self._assert_single_item(arr, delay=5) self.assertIsInstance(p, policies.WaitAfterPolicy) p = self._assert_single_item(arr, delay=1) self.assertIsInstance(p, policies.RetryPolicy) self.assertEqual(2, p.count) p = self._assert_single_item(arr, delay=7) self.assertIsInstance(p, policies.TimeoutPolicy) def test_wait_before_policy(self): wb_service.create_workbook_v2(WAIT_BEFORE_WB % 1) # Start workflow. wf_ex = self.engine.start_workflow('wb.wf1') with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual(states.RUNNING_DELAYED, task_ex.state) self.assertDictEqual( {'wait_before_policy': {'skip': True}}, task_ex.runtime_context ) self.await_workflow_success(wf_ex.id) def test_wait_before_policy_zero_seconds(self): wb_service.create_workbook_v2(WAIT_BEFORE_WB % 0) # Start workflow. wf_ex = self.engine.start_workflow('wb.wf1') with db_api.transaction(): # Note: We need to reread execution to access related tasks. 
wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual(states.RUNNING, task_ex.state) self.await_workflow_success(wf_ex.id) def test_wait_before_policy_negative_number(self): self.assertRaises( exc.InvalidModelException, wb_service.create_workbook_v2, WAIT_BEFORE_WB % -1 ) def test_wait_before_policy_from_var(self): wb_service.create_workbook_v2(WAIT_BEFORE_FROM_VAR) # Start workflow. wf_ex = self.engine.start_workflow( 'wb.wf1', wf_input={'wait_before': 1} ) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual(states.RUNNING_DELAYED, task_ex.state) self.await_workflow_success(wf_ex.id) def test_wait_before_policy_from_var_zero_seconds(self): wb_service.create_workbook_v2(WAIT_BEFORE_FROM_VAR) # Start workflow. wf_ex = self.engine.start_workflow( 'wb.wf1', wf_input={'wait_before': 0} ) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] # If wait_before is 0 start the task immediately without delay. self.assertEqual(states.RUNNING, task_ex.state) self.await_workflow_success(wf_ex.id) def test_wait_before_policy_from_var_negative_number(self): wb_service.create_workbook_v2(WAIT_BEFORE_FROM_VAR) # Start workflow. wf_ex = self.engine.start_workflow( 'wb.wf1', wf_input={'wait_before': -1} ) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] # If wait_before value is less than 0 the task should fail with # InvalidModelException. 
self.assertEqual(states.ERROR, task_ex.state) self.await_workflow_error(wf_ex.id) def test_wait_before_policy_two_tasks(self): wf_text = """--- version: '2.0' wf: tasks: a: wait-before: 2 on-success: b b: action: std.noop """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions self.assertEqual(2, len(task_execs)) self._assert_multiple_items(task_execs, 2, state=states.SUCCESS) def test_wait_after_policy(self): wb_service.create_workbook_v2(WAIT_AFTER_WB % 2) # Start workflow. wf_ex = self.engine.start_workflow('wb.wf1') with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual(states.RUNNING, task_ex.state) self.assertDictEqual({}, task_ex.runtime_context) self.await_task_delayed(task_ex.id, delay=0.5) self.await_task_success(task_ex.id) def test_wait_after_policy_zero_seconds(self): wb_service.create_workbook_v2(WAIT_AFTER_WB % 0) # Start workflow. wf_ex = self.engine.start_workflow('wb.wf1') with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual(states.RUNNING, task_ex.state) self.assertDictEqual({}, task_ex.runtime_context) try: self.await_task_delayed(task_ex.id, delay=0.5) except AssertionError: # There was no delay as expected. pass else: self.fail("Shouldn't happen") self.await_task_success(task_ex.id) def test_wait_after_policy_negative_number(self): self.assertRaises( exc.InvalidModelException, wb_service.create_workbook_v2, WAIT_AFTER_WB % -1 ) def test_wait_after_policy_from_var(self): wb_service.create_workbook_v2(WAIT_AFTER_FROM_VAR) # Start workflow. 
wf_ex = self.engine.start_workflow( 'wb.wf1', wf_input={'wait_after': 2} ) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual(states.RUNNING, task_ex.state) self.assertDictEqual({}, task_ex.runtime_context) self.await_task_delayed(task_ex.id, delay=0.5) self.await_task_success(task_ex.id) def test_wait_after_policy_from_var_zero_seconds(self): wb_service.create_workbook_v2(WAIT_AFTER_FROM_VAR) # Start workflow. wf_ex = self.engine.start_workflow( 'wb.wf1', wf_input={'wait_after': 0} ) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual(states.RUNNING, task_ex.state) self.assertDictEqual({}, task_ex.runtime_context) try: self.await_task_delayed(task_ex.id, delay=0.5) except AssertionError: # There was no delay as expected. pass else: self.fail("Shouldn't happen") self.await_task_success(task_ex.id) def test_wait_after_policy_from_var_negative_number(self): wb_service.create_workbook_v2(WAIT_AFTER_FROM_VAR) # Start workflow. wf_ex = self.engine.start_workflow( 'wb.wf1', wf_input={'wait_after': -1} ) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] # If wait_after value is less than 0 the task should fail with # InvalidModelException. self.assertEqual(states.ERROR, task_ex.state) self.await_workflow_error(wf_ex.id) self.assertDictEqual({}, task_ex.runtime_context) @mock.patch.object( requests, 'request', mock.MagicMock(side_effect=Exception()) ) def test_retry_policy(self): wb_service.create_workbook_v2(RETRY_WB % {'count': 3, 'delay': 1}) # Start workflow. 
wf_ex = self.engine.start_workflow('wb.wf1') with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual(states.RUNNING, task_ex.state) self.assertDictEqual({}, task_ex.runtime_context) self.await_task_delayed(task_ex.id, delay=0.5) self.await_task_error(task_ex.id) self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual( 3, task_ex.runtime_context["retry_task_policy"]["retry_no"] ) @mock.patch.object( requests, 'request', mock.MagicMock(side_effect=Exception()) ) def test_retry_policy_zero_count(self): wb_service.create_workbook_v2(RETRY_WB % {'count': 0, 'delay': 1}) # Start workflow. wf_ex = self.engine.start_workflow('wb.wf1') with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual(states.RUNNING, task_ex.state) self.assertDictEqual({}, task_ex.runtime_context) try: self.await_task_delayed(task_ex.id, delay=0.5) except AssertionError: # There were no scheduled tasks as expected. pass else: self.fail("Shouldn't happen") self.await_task_error(task_ex.id) self.await_workflow_error(wf_ex.id) self.assertNotIn("retry_task_policy", task_ex.runtime_context) @mock.patch.object( requests, 'request', mock.MagicMock(side_effect=Exception()) ) def test_retry_policy_negative_numbers(self): # Negative delay is not accepted. self.assertRaises( exc.InvalidModelException, wb_service.create_workbook_v2, RETRY_WB % {'count': 1, 'delay': -1} ) # Negative count is not accepted. 
self.assertRaises( exc.InvalidModelException, wb_service.create_workbook_v2, RETRY_WB % {'count': -1, 'delay': 1} ) @mock.patch.object( requests, 'request', mock.MagicMock(side_effect=Exception()) ) def test_retry_policy_from_var(self): wb_service.create_workbook_v2(RETRY_WB_FROM_VAR) # Start workflow. wf_ex = self.engine.start_workflow( 'wb.wf1', wf_input={'count': 3, 'delay': 1} ) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual(states.RUNNING, task_ex.state) self.assertDictEqual({}, task_ex.runtime_context) self.await_task_delayed(task_ex.id, delay=0.5) self.await_task_error(task_ex.id) self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual( 3, task_ex.runtime_context["retry_task_policy"]["retry_no"] ) @mock.patch.object( requests, 'request', mock.MagicMock(side_effect=Exception()) ) def test_retry_policy_from_var_zero_iterations(self): wb_service.create_workbook_v2(RETRY_WB_FROM_VAR) # Start workflow. wf_ex = self.engine.start_workflow( 'wb.wf1', wf_input={'count': 0, 'delay': 1} ) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual(states.RUNNING, task_ex.state) self.assertDictEqual({}, task_ex.runtime_context) try: self.await_task_delayed(task_ex.id, delay=0.5) except AssertionError: # There were no scheduled tasks as expected. 
pass else: self.fail("Shouldn't happen") self.await_task_error(task_ex.id) self.await_workflow_error(wf_ex.id) self.assertNotIn("retry_task_policy", task_ex.runtime_context) @mock.patch.object( requests, 'request', mock.MagicMock(side_effect=Exception()) ) def test_retry_policy_from_var_negative_numbers(self): wb_service.create_workbook_v2(RETRY_WB_FROM_VAR) # Start workflow with negative count. wf_ex = self.engine.start_workflow( 'wb.wf1', wf_input={'count': -1, 'delay': 1} ) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual(states.ERROR, task_ex.state) self.assertDictEqual({}, task_ex.runtime_context) self.await_workflow_error(wf_ex.id) # Start workflow with negative delay. wf_ex = self.engine.start_workflow( 'wb.wf1', wf_input={'count': 1, 'delay': -1} ) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual(states.ERROR, task_ex.state) self.assertDictEqual({}, task_ex.runtime_context) self.await_workflow_error(wf_ex.id) def test_retry_policy_never_happen(self): retry_wb = """--- version: '2.0' name: wb workflows: wf1: tasks: task1: action: std.echo output="hello" retry: count: 3 delay: 1 """ wb_service.create_workbook_v2(retry_wb) # Start workflow. wf_ex = self.engine.start_workflow('wb.wf1') with db_api.transaction(): # Note: We need to reread execution to access related tasks. 
wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.await_task_success(task_ex.id) self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual( {}, task_ex.runtime_context["retry_task_policy"] ) def test_retry_policy_break_on(self): retry_wb = """--- version: '2.0' name: wb workflows: wf1: input: - var: 4 tasks: task1: action: std.fail retry: count: 3 delay: 1 break-on: <% $.var >= 3 %> """ wb_service.create_workbook_v2(retry_wb) # Start workflow. wf_ex = self.engine.start_workflow('wb.wf1') with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.await_task_error(task_ex.id) self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual( {}, task_ex.runtime_context["retry_task_policy"] ) def test_retry_policy_break_on_not_happened(self): retry_wb = """--- version: '2.0' name: wb workflows: wf1: input: - var: 2 tasks: task1: action: std.fail retry: count: 3 delay: 1 break-on: <% $.var >= 3 %> """ wb_service.create_workbook_v2(retry_wb) # Start workflow. wf_ex = self.engine.start_workflow('wb.wf1') with db_api.transaction(): # Note: We need to reread execution to access related tasks. 
wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.await_task_error(task_ex.id) self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual( 3, task_ex.runtime_context['retry_task_policy']['retry_no'] ) @mock.patch.object( std_actions.EchoAction, 'run', mock.Mock(side_effect=[1, 2, 3, 4]) ) def test_retry_continue_on(self): retry_wb = """--- version: '2.0' name: wb workflows: wf1: tasks: task1: action: std.echo output="mocked result" retry: count: 4 delay: 1 continue-on: <% task(task1).result < 3 %> """ wb_service.create_workbook_v2(retry_wb) # Start workflow. wf_ex = self.engine.start_workflow('wb.wf1') with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.await_task_success(task_ex.id) self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual( 2, task_ex.runtime_context['retry_task_policy']['retry_no'] ) def test_retry_continue_on_not_happened(self): retry_wb = """--- version: '2.0' name: wb workflows: wf1: tasks: task1: action: std.echo output=4 retry: count: 4 delay: 1 continue-on: <% task(task1).result <= 3 %> """ wb_service.create_workbook_v2(retry_wb) # Start workflow. wf_ex = self.engine.start_workflow('wb.wf1') with db_api.transaction(): # Note: We need to reread execution to access related tasks. 
wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.await_task_success(task_ex.id) self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual( {}, task_ex.runtime_context['retry_task_policy'] ) def test_retry_policy_one_line(self): retry_wb = """--- version: '2.0' name: wb workflows: wf1: type: direct tasks: task1: action: std.fail retry: count=3 delay=1 """ wb_service.create_workbook_v2(retry_wb) # Start workflow. wf_ex = self.engine.start_workflow('wb.wf1') with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.await_task_error(task_ex.id) self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual( 3, task_ex.runtime_context['retry_task_policy']['retry_no'] ) def test_retry_policy_subworkflow_force_fail(self): retry_wb = """--- version: '2.0' name: wb workflows: main: tasks: task1: workflow: work retry: count: 3 delay: 1 work: tasks: do: action: std.fail on-error: - fail """ wb_service.create_workbook_v2(retry_wb) # Start workflow. wf_ex = self.engine.start_workflow('wb.main') with db_api.transaction(): # Note: We need to reread execution to access related tasks. 
wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.await_task_error(task_ex.id) self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual( 3, task_ex.runtime_context['retry_task_policy']['retry_no'] ) @mock.patch.object( std_actions.EchoAction, 'run', mock.Mock(side_effect=[exc.ActionException(), "mocked result"]) ) def test_retry_policy_succeed_after_failure(self): retry_wb = """--- version: '2.0' name: wb workflows: wf1: output: result: <% task(task1).result %> tasks: task1: action: std.echo output="mocked result" retry: count: 3 delay: 1 """ wb_service.create_workbook_v2(retry_wb) # Start workflow. wf_ex = self.engine.start_workflow('wb.wf1') with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.await_task_success(task_ex.id) self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) wf_output = wf_ex.output task_ex = wf_ex.task_executions[0] self.assertDictEqual( {'retry_no': 1}, task_ex.runtime_context['retry_task_policy'] ) self.assertDictEqual({'result': 'mocked result'}, wf_output) @mock.patch.object( std_actions.EchoAction, 'run', mock.MagicMock(side_effect=[exc.ActionException(), 'value']) ) def test_retry_policy_succeed_after_failure_with_publish(self): retry_wf = """--- version: '2.0' wf1: output: result: <% task(task2).result %> tasks: task1: action: std.noop publish: key: value on-success: - task2 task2: action: std.echo output=<% $.key %> retry: count: 3 delay: 1 """ wf_service.create_workflows(retry_wf) wf_ex = self.engine.start_workflow('wf1') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) wf_output = wf_ex.output task_execs = wf_ex.task_executions retry_task = 
self._assert_single_item(task_execs, name='task2') self.assertDictEqual( {'retry_no': 1}, retry_task.runtime_context['retry_task_policy'] ) self.assertDictEqual({'result': 'value'}, wf_output) def test_timeout_policy(self): wb_service.create_workbook_v2(TIMEOUT_WB % 2) # Start workflow. wf_ex = self.engine.start_workflow('wb.wf1') with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual(states.RUNNING, task_ex.state) self.await_task_error(task_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions self._assert_single_item(task_execs, name='task1') self.await_workflow_success(wf_ex.id) def test_timeout_policy_zero_seconds(self): wb = """--- version: '2.0' name: wb workflows: wf1: type: direct tasks: task1: action: std.echo output="Hi!" timeout: 0 """ wb_service.create_workbook_v2(wb) # Start workflow. wf_ex = self.engine.start_workflow('wb.wf1') with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual(states.RUNNING, task_ex.state) self.await_task_success(task_ex.id) self.await_workflow_success(wf_ex.id) def test_timeout_policy_negative_number(self): # Negative timeout is not accepted. self.assertRaises( exc.InvalidModelException, wb_service.create_workbook_v2, TIMEOUT_WB % -1 ) def test_timeout_policy_success_after_timeout(self): wb_service.create_workbook_v2(TIMEOUT_WB2) # Start workflow. wf_ex = self.engine.start_workflow('wb.wf1') with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual(states.RUNNING, task_ex.state) self.await_task_error(task_ex.id) self.await_workflow_error(wf_ex.id) # Wait until timeout exceeds. 
self._sleep(1) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions # Make sure that engine did not create extra tasks. self.assertEqual(1, len(task_execs)) def test_timeout_policy_from_var(self): wb_service.create_workbook_v2(TIMEOUT_FROM_VAR) # Start workflow. wf_ex = self.engine.start_workflow('wb.wf1', wf_input={'timeout': 1}) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual(states.RUNNING, task_ex.state) self.await_task_error(task_ex.id) self.await_workflow_error(wf_ex.id) def test_timeout_policy_from_var_zero_seconds(self): wb = """--- version: '2.0' name: wb workflows: wf1: type: direct input: - timeout tasks: task1: action: std.echo output="Hi!" timeout: <% $.timeout %> """ wb_service.create_workbook_v2(wb) # Start workflow. wf_ex = self.engine.start_workflow('wb.wf1', wf_input={'timeout': 0}) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual(states.RUNNING, task_ex.state) self.await_task_success(task_ex.id) self.await_workflow_success(wf_ex.id) def test_timeout_policy_from_var_negative_number(self): wb_service.create_workbook_v2(TIMEOUT_FROM_VAR) # Start workflow. wf_ex = self.engine.start_workflow('wb.wf1', wf_input={'timeout': -1}) with db_api.transaction(): # Note: We need to reread execution to access related tasks. 
wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual(states.ERROR, task_ex.state) self.await_workflow_error(wf_ex.id) def test_action_timeout(self): wb = """--- version: '2.0' wf1: tasks: task1: action: std.sleep seconds=10 timeout: 2 """ wf_service.create_workflows(wb) wf_ex = self.engine.start_workflow('wf1') with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] action_ex = task_ex.action_executions[0] with timeout.Timeout(8): self.await_workflow_error(wf_ex.id) self.await_task_error(task_ex.id) self.await_action_error(action_ex.id) def test_pause_before_policy(self): wb_service.create_workbook_v2(PAUSE_BEFORE_WB) # Start workflow. wf_ex = self.engine.start_workflow('wb.wf1') with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions task_ex = self._assert_single_item(task_execs, name='task1') self.assertEqual(states.IDLE, task_ex.state) self.await_workflow_paused(wf_ex.id) self._sleep(1) self.engine.resume_workflow(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions self._assert_single_item(task_execs, name='task1') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions task_ex = self._assert_single_item(task_execs, name='task1') next_task_ex = self._assert_single_item(task_execs, name='task2') self.assertEqual(states.SUCCESS, task_ex.state) self.assertEqual(states.SUCCESS, next_task_ex.state) def test_pause_before_with_delay_policy(self): wb_service.create_workbook_v2(PAUSE_BEFORE_DELAY_WB) # Start workflow. 
wf_ex = self.engine.start_workflow('wb.wf1') with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions task_ex = self._assert_single_item(task_execs, name='task1') self.assertEqual(states.IDLE, task_ex.state) # Verify wf paused by pause-before self.await_workflow_paused(wf_ex.id) # Allow wait-before to expire self._sleep(2) wf_ex = db_api.get_workflow_execution(wf_ex.id) # Verify wf still paused (wait-before didn't reactivate) self.await_workflow_paused(wf_ex.id) task_ex = db_api.get_task_execution(task_ex.id) self.assertEqual(states.IDLE, task_ex.state) self.engine.resume_workflow(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions self._assert_single_item(task_execs, name='task1') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions task_ex = self._assert_single_item(task_execs, name='task1') next_task_ex = self._assert_single_item(task_execs, name='task2') self.assertEqual(states.SUCCESS, task_ex.state) self.assertEqual(states.SUCCESS, next_task_ex.state) def test_concurrency_is_in_runtime_context(self): wb_service.create_workbook_v2(CONCURRENCY_WB % 4) # Start workflow. wf_ex = self.engine.start_workflow('wb.wf1') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions task_ex = self._assert_single_item(task_execs, name='task1') self.assertEqual(states.SUCCESS, task_ex.state) self.assertEqual(4, task_ex.runtime_context['concurrency']) def test_concurrency_is_in_runtime_context_zero_value(self): wb_service.create_workbook_v2(CONCURRENCY_WB % 0) # Start workflow. 
wf_ex = self.engine.start_workflow('wb.wf1') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions task_ex = self._assert_single_item(task_execs, name='task1') self.assertEqual(states.SUCCESS, task_ex.state) self.assertNotIn('concurrency', task_ex.runtime_context) def test_concurrency_is_in_runtime_context_negative_number(self): # Negative concurrency value is not accepted. self.assertRaises( exc.InvalidModelException, wb_service.create_workbook_v2, CONCURRENCY_WB % -1 ) def test_concurrency_is_in_runtime_context_from_var(self): wb_service.create_workbook_v2(CONCURRENCY_WB_FROM_VAR) # Start workflow. wf_ex = self.engine.start_workflow( 'wb.wf1', wf_input={'concurrency': 4} ) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions task_ex = self._assert_single_item(task_execs, name='task1') self.assertEqual(4, task_ex.runtime_context['concurrency']) def test_concurrency_is_in_runtime_context_from_var_zero_value(self): wb_service.create_workbook_v2(CONCURRENCY_WB_FROM_VAR) # Start workflow. wf_ex = self.engine.start_workflow( 'wb.wf1', wf_input={'concurrency': 0} ) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions task_ex = self._assert_single_item(task_execs, name='task1') self.assertNotIn('concurrency', task_ex.runtime_context) def test_concurrency_is_in_runtime_context_from_var_negative_number(self): wb_service.create_workbook_v2(CONCURRENCY_WB_FROM_VAR) # Start workflow. 
wf_ex = self.engine.start_workflow( 'wb.wf1', wf_input={'concurrency': -1} ) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] self.assertEqual(states.ERROR, task_ex.state) self.await_workflow_error(wf_ex.id) def test_wrong_policy_prop_type(self): wb = """--- version: "2.0" name: wb workflows: wf1: type: direct input: - wait_before tasks: task1: action: std.echo output="Hi!" wait-before: <% $.wait_before %> """ wb_service.create_workbook_v2(wb) # Start workflow. wf_ex = self.engine.start_workflow( 'wb.wf1', wf_input={'wait_before': '1'} ) self.assertIn( 'Invalid data type in WaitBeforePolicy', wf_ex.state_info ) self.assertEqual(states.ERROR, wf_ex.state) def test_delayed_task_and_correct_finish_workflow(self): wf_delayed_state = """--- version: "2.0" wf: type: direct tasks: task1: action: std.noop wait-before: 1 task2: action: std.noop """ wf_service.create_workflows(wf_delayed_state) # Start workflow. wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertEqual(2, len(wf_ex.task_executions)) @mock.patch('mistral.actions.std_actions.EchoAction.run') def test_retry_policy_break_on_with_dict(self, run_method): run_method.return_value = types.Result(error={'key-1': 15}) wf_retry_break_on_with_dictionary = """--- version: '2.0' name: wb workflows: wf1: tasks: fail_task: action: std.echo output='mock' retry: count: 3 delay: 1 break-on: <% task().result['key-1'] = 15 %> """ wb_service.create_workbook_v2(wf_retry_break_on_with_dictionary) # Start workflow. 
        wf_ex = self.engine.start_workflow('wb.wf1')

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            fail_task_ex = wf_ex.task_executions[0]

            self.assertEqual(states.ERROR, fail_task_ex.state)
            self.assertEqual(
                {},
                fail_task_ex.runtime_context["retry_task_policy"]
            )
mistral-6.0.0/mistral/tests/unit/engine/test_run_action.py
# Copyright 2015 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import mock

from oslo_config import cfg

from mistral.actions import std_actions
from mistral.db.v2 import api as db_api
from mistral.db.v2.sqlalchemy import models
from mistral import exceptions as exc
from mistral.services import actions
from mistral.tests.unit.engine import base
from mistral.workflow import states

from mistral_lib import actions as ml_actions


# Use the set_default method to set value otherwise in certain test cases
# the change in value is not permanent.
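The retry-policy tests earlier in this suite (e.g. `test_retry_policy_break_on_with_dict` above) exercise the `count`, `delay` and `break-on` properties of Mistral's retry task policy. As a rough standalone illustration of those semantics only — this is not Mistral engine code, and `run_with_retry` is a hypothetical name introduced here for the example — the behaviour can be sketched like this:

```python
import time


def run_with_retry(action, count, delay, break_on=None):
    """Run ``action``; on failure, retry up to ``count`` times.

    Sleeps ``delay`` seconds between attempts, and stops retrying as
    soon as ``break_on`` (a predicate over the raised exception)
    returns True -- loosely mirroring the 'break-on' policy property.
    """
    last_error = None

    for attempt in range(count + 1):  # the first run plus ``count`` retries
        try:
            return action()
        except Exception as e:
            last_error = e

            # 'break-on' matched: give up immediately.
            if break_on is not None and break_on(e):
                break

            if attempt < count:
                time.sleep(delay)

    raise last_error
```

With `break_on` matching on the first failure, only one attempt is made and the original error is re-raised, which is the shape the test above asserts (the task ends up in ERROR with an empty `retry_task_policy` runtime context).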
cfg.CONF.set_default('auth_enable', False, group='pecan') class RunActionEngineTest(base.EngineTestCase): @classmethod def heavy_init(cls): super(RunActionEngineTest, cls).heavy_init() action = """--- version: '2.0' concat: base: std.echo base-input: output: <% $.left %><% $.right %> input: - left - right concat3: base: concat base-input: left: <% $.left %><% $.center %> right: <% $.right %> input: - left - center - right concat4: base: concat3 base-input: left: <% $.left %> center: <% $.center_left %><% $.center_right %> right: <% $.right %> input: - left - center_left - center_right - right missing_base: base: wrong input: - some_input nested_missing_base: base: missing_base input: - some_input loop_action: base: loop_action base-input: output: <% $.output %> input: - output level2_loop_action: base: loop_action base-input: output: <% $.output %> input: - output """ actions.create_actions(action) def test_run_action_sync(self): # Start action and see the result. action_ex = self.engine.start_action('std.echo', {'output': 'Hello!'}) self.assertEqual('Hello!', action_ex.output['result']) self.assertEqual(states.SUCCESS, action_ex.state) @mock.patch.object( std_actions.EchoAction, 'run', mock.Mock(side_effect=exc.ActionException("some error")) ) def test_run_action_error(self): # Start action and see the result. action_ex = self.engine.start_action('std.echo', {'output': 'Hello!'}) self.assertIsNotNone(action_ex.output) self.assertIn('some error', action_ex.output['result']) self.assertEqual(states.ERROR, action_ex.state) def test_run_action_save_result(self): # Start action. action_ex = self.engine.start_action( 'std.echo', {'output': 'Hello!'}, save_result=True ) self.await_action_success(action_ex.id) with db_api.transaction(): action_ex = db_api.get_action_execution(action_ex.id) self.assertEqual(states.SUCCESS, action_ex.state) self.assertEqual({'result': 'Hello!'}, action_ex.output) def test_run_action_run_sync(self): # Start action. 
action_ex = self.engine.start_action( 'std.echo', {'output': 'Hello!'}, run_sync=True ) self.assertEqual('Hello!', action_ex.output['result']) self.assertEqual(states.SUCCESS, action_ex.state) def test_run_action_save_result_and_run_sync(self): # Start action. action_ex = self.engine.start_action( 'std.echo', {'output': 'Hello!'}, save_result=True, run_sync=True ) self.assertEqual('Hello!', action_ex.output['result']) self.assertEqual(states.SUCCESS, action_ex.state) with db_api.transaction(): action_ex = db_api.get_action_execution(action_ex.id) self.assertEqual(states.SUCCESS, action_ex.state) self.assertEqual({'result': 'Hello!'}, action_ex.output) def test_run_action_run_sync_error(self): # Start action. self.assertRaises( exc.InputException, self.engine.start_action, 'std.async_noop', {}, run_sync=True ) def test_run_action_async(self): action_ex = self.engine.start_action('std.async_noop', {}) self.await_action_state(action_ex.id, states.RUNNING) action_ex = db_api.get_action_execution(action_ex.id) self.assertEqual(states.RUNNING, action_ex.state) @mock.patch.object( std_actions.AsyncNoOpAction, 'run', mock.MagicMock(side_effect=exc.ActionException('Invoke failed.'))) def test_run_action_async_invoke_failure(self): action_ex = self.engine.start_action('std.async_noop', {}) self.await_action_error(action_ex.id) with db_api.transaction(): action_ex = db_api.get_action_execution(action_ex.id) self.assertEqual(states.ERROR, action_ex.state) self.assertIn('Invoke failed.', action_ex.output.get('result', '')) @mock.patch.object( std_actions.AsyncNoOpAction, 'run', mock.MagicMock(return_value=ml_actions.Result(error='Invoke erred.'))) def test_run_action_async_invoke_with_error(self): action_ex = self.engine.start_action('std.async_noop', {}) self.await_action_error(action_ex.id) with db_api.transaction(): action_ex = db_api.get_action_execution(action_ex.id) self.assertEqual(states.ERROR, action_ex.state) self.assertIn('Invoke erred.', 
action_ex.output.get('result', '')) def test_run_action_adhoc(self): # Start action and see the result. action_ex = self.engine.start_action( 'concat', {'left': 'Hello, ', 'right': 'John Doe!'} ) self.assertEqual('Hello, John Doe!', action_ex.output['result']) def test_run_level_two_action_adhoc(self): # Start action and see the result. action_ex = self.engine.start_action( 'concat3', {'left': 'Hello, ', 'center': 'John', 'right': ' Doe!'} ) self.assertEqual('Hello, John Doe!', action_ex.output['result']) def test_run_level_three_action_adhoc(self): # Start action and see the result. action_ex = self.engine.start_action( 'concat4', { 'left': 'Hello, ', 'center_left': 'John', 'center_right': ' Doe', 'right': '!' } ) self.assertEqual('Hello, John Doe!', action_ex.output['result']) def test_run_action_with_missing_base(self): # Start action and see the result. self.assertRaises( exc.InvalidActionException, self.engine.start_action, 'missing_base', {'some_input': 'Hi'} ) def test_run_action_with_missing_nested_base(self): # Start action and see the result. self.assertRaises( exc.InvalidActionException, self.engine.start_action, 'nested_missing_base', {'some_input': 'Hi'} ) def test_run_loop_action(self): # Start action and see the result. self.assertRaises( ValueError, self.engine.start_action, 'loop_action', {'output': 'Hello'} ) def test_run_level_two_loop_action(self): # Start action and see the result. self.assertRaises( ValueError, self.engine.start_action, 'level2_loop_action', {'output': 'Hello'} ) def test_run_action_wrong_input(self): # Start action and see the result. exception = self.assertRaises( exc.InputException, self.engine.start_action, 'std.http', {'url': 'Hello, ', 'metod': 'John Doe!'} ) self.assertIn('std.http', str(exception)) def test_adhoc_action_wrong_input(self): # Start action and see the result. 
exception = self.assertRaises( exc.InputException, self.engine.start_action, 'concat', {'left': 'Hello, ', 'ri': 'John Doe!'} ) self.assertIn('concat', str(exception)) # TODO(rakhmerov): This is an example of a bad test. It pins to # implementation details too much and prevents from making refactoring # easily. When writing tests we should make assertions about # consequences, not about how internal machinery works, i.e. we need to # follow "black box" testing paradigm. @mock.patch('mistral.engine.actions.resolve_action_definition') @mock.patch('mistral.engine.utils.validate_input') @mock.patch('mistral.services.action_manager.get_action_class') @mock.patch('mistral.engine.actions.PythonAction.run') def test_run_action_with_kwargs_input(self, run_mock, class_mock, validate_mock, def_mock): action_def = models.ActionDefinition() action_def.update({ 'name': 'fake_action', 'action_class': '', 'attributes': {}, 'description': '', 'input': '**kwargs', 'is_system': True, 'scope': 'public' }) def_mock.return_value = action_def run_mock.return_value = ml_actions.Result(data='Hello') class_ret = mock.MagicMock() class_mock.return_value = class_ret self.engine.start_action('fake_action', {'input': 'Hello'}) self.assertEqual(1, def_mock.call_count) def_mock.assert_called_with('fake_action') self.assertEqual(0, validate_mock.call_count) class_ret.assert_called_once_with(input='Hello') run_mock.assert_called_once_with( {'input': 'Hello'}, None, save=False, timeout=None ) mistral-6.0.0/mistral/tests/unit/engine/base.py0000666000175100017510000002336513245513272021560 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import eventlet from oslo_config import cfg from oslo_log import log as logging import oslo_messaging as messaging from oslo_service import service from mistral.db.v2 import api as db_api from mistral.engine import engine_server from mistral.executors import base as exe from mistral.executors import executor_server from mistral.rpc import base as rpc_base from mistral.rpc import clients as rpc_clients from mistral.tests.unit import base from mistral.workflow import states LOG = logging.getLogger(__name__) # Default delay and timeout in seconds for await_xxx() functions. DEFAULT_DELAY = 1 DEFAULT_TIMEOUT = 30 def launch_service(s): launcher = service.ServiceLauncher(cfg.CONF) launcher.launch_service(s) launcher.wait() class EngineTestCase(base.DbTestCase): def setUp(self): super(EngineTestCase, self).setUp() # Get transport here to let oslo.messaging setup default config # before changing the rpc_backend to the fake driver; otherwise, # oslo.messaging will throw exception. messaging.get_transport(cfg.CONF) # Set the transport to 'fake' for Engine tests. cfg.CONF.set_default('rpc_backend', 'fake') # Drop all RPC objects (transport, clients). rpc_base.cleanup() rpc_clients.cleanup() exe.cleanup() self.threads = [] # Start remote executor. 
if cfg.CONF.executor.type == 'remote': LOG.info("Starting remote executor threads...") self.executor_client = rpc_clients.get_executor_client() exe_svc = executor_server.get_oslo_service(setup_profiler=False) self.executor = exe_svc.executor self.threads.append(eventlet.spawn(launch_service, exe_svc)) self.addCleanup(exe_svc.stop, True) # Start engine. LOG.info("Starting engine threads...") self.engine_client = rpc_clients.get_engine_client() eng_svc = engine_server.get_oslo_service(setup_profiler=False) self.engine = eng_svc.engine self.threads.append(eventlet.spawn(launch_service, eng_svc)) self.addCleanup(eng_svc.stop, True) self.addOnException(self.print_executions) self.addCleanup(self.kill_threads) # Make sure that both services fully started, otherwise # the test may run too early. if cfg.CONF.executor.type == 'remote': exe_svc.wait_started() eng_svc.wait_started() def kill_threads(self): LOG.info("Finishing engine and executor threads...") for thread in self.threads: thread.kill() @staticmethod def print_executions(exc_info=None): if exc_info: print("\nEngine test case exception occurred: %s" % exc_info[1]) print("Exception type: %s" % exc_info[0]) print("\nPrinting workflow executions...") with db_api.transaction(): wf_execs = db_api.get_workflow_executions() for w in wf_execs: print( "\n%s (%s) [state=%s, state_info=%s, output=%s]" % (w.name, w.id, w.state, w.state_info, w.output) ) for t in w.task_executions: print( "\t%s [id=%s, state=%s, state_info=%s, processed=%s," " published=%s]" % (t.name, t.id, t.state, t.state_info, t.processed, t.published) ) child_execs = t.executions for a in child_execs: print( "\t\t%s [id=%s, state=%s, state_info=%s," " accepted=%s, output=%s]" % (a.name, a.id, a.state, a.state_info, a.accepted, a.output) ) print("\nPrinting standalone action executions...") child_execs = db_api.get_action_executions(task_execution_id=None) for a in child_execs: print( "\t\t%s [id=%s, state=%s, state_info=%s, accepted=%s," " output=%s]" % 
(a.name, a.id, a.state, a.state_info, a.accepted, a.output) ) # Various methods for action execution objects. def is_action_in_state(self, ex_id, state): return db_api.get_action_execution(ex_id).state == state def await_action_state(self, ex_id, state, delay=DEFAULT_DELAY, timeout=DEFAULT_TIMEOUT): self._await( lambda: self.is_action_in_state(ex_id, state), delay, timeout ) def is_action_success(self, ex_id): return self.is_action_in_state(ex_id, states.SUCCESS) def is_action_error(self, ex_id): return self.is_action_in_state(ex_id, states.ERROR) def await_action_success(self, ex_id, delay=DEFAULT_DELAY, timeout=DEFAULT_TIMEOUT): self.await_action_state(ex_id, states.SUCCESS, delay, timeout) def await_action_error(self, ex_id, delay=DEFAULT_DELAY, timeout=DEFAULT_TIMEOUT): self.await_action_state(ex_id, states.ERROR, delay, timeout) # Various methods for task execution objects. def is_task_in_state(self, ex_id, state): return db_api.get_task_execution(ex_id).state == state def await_task_state(self, ex_id, state, delay=DEFAULT_DELAY, timeout=DEFAULT_TIMEOUT): self._await( lambda: self.is_task_in_state(ex_id, state), delay, timeout ) def is_task_success(self, task_ex_id): return self.is_task_in_state(task_ex_id, states.SUCCESS) def is_task_error(self, task_ex_id): return self.is_task_in_state(task_ex_id, states.ERROR) def is_task_delayed(self, task_ex_id): return self.is_task_in_state(task_ex_id, states.RUNNING_DELAYED) def is_task_processed(self, task_ex_id): return db_api.get_task_execution(task_ex_id).processed def await_task_running(self, ex_id, delay=DEFAULT_DELAY, timeout=DEFAULT_TIMEOUT): self.await_task_state(ex_id, states.RUNNING, delay, timeout) def await_task_success(self, ex_id, delay=DEFAULT_DELAY, timeout=DEFAULT_TIMEOUT): self.await_task_state(ex_id, states.SUCCESS, delay, timeout) def await_task_error(self, ex_id, delay=DEFAULT_DELAY, timeout=DEFAULT_TIMEOUT): self.await_task_state(ex_id, states.ERROR, delay, timeout) def await_task_cancelled(self, 
ex_id, delay=DEFAULT_DELAY, timeout=DEFAULT_TIMEOUT): self.await_task_state(ex_id, states.CANCELLED, delay, timeout) def await_task_paused(self, ex_id, delay=DEFAULT_DELAY, timeout=DEFAULT_TIMEOUT): self.await_task_state(ex_id, states.PAUSED, delay, timeout) def await_task_delayed(self, ex_id, delay=DEFAULT_DELAY, timeout=DEFAULT_TIMEOUT): self.await_task_state(ex_id, states.RUNNING_DELAYED, delay, timeout) def await_task_processed(self, ex_id, delay=DEFAULT_DELAY, timeout=DEFAULT_TIMEOUT): self._await(lambda: self.is_task_processed(ex_id), delay, timeout) # Various methods for workflow execution objects. def is_workflow_in_state(self, ex_id, state): return db_api.get_workflow_execution(ex_id).state == state def await_workflow_state(self, ex_id, state, delay=DEFAULT_DELAY, timeout=DEFAULT_TIMEOUT): self._await( lambda: self.is_workflow_in_state(ex_id, state), delay, timeout, "Execution {} to reach {} state".format(ex_id, state) ) def await_workflow_running(self, ex_id, delay=DEFAULT_DELAY, timeout=DEFAULT_TIMEOUT): self.await_workflow_state(ex_id, states.RUNNING, delay, timeout) def await_workflow_success(self, ex_id, delay=DEFAULT_DELAY, timeout=DEFAULT_TIMEOUT): self.await_workflow_state(ex_id, states.SUCCESS, delay, timeout) def await_workflow_error(self, ex_id, delay=DEFAULT_DELAY, timeout=DEFAULT_TIMEOUT): self.await_workflow_state(ex_id, states.ERROR, delay, timeout) def await_workflow_paused(self, ex_id, delay=DEFAULT_DELAY, timeout=DEFAULT_TIMEOUT): self.await_workflow_state(ex_id, states.PAUSED, delay, timeout) def await_workflow_cancelled(self, ex_id, delay=DEFAULT_DELAY, timeout=DEFAULT_TIMEOUT): self.await_workflow_state(ex_id, states.CANCELLED, delay, timeout) mistral-6.0.0/mistral/tests/unit/engine/test_environment.py0000666000175100017510000002335613245513262024250 0ustar zuulzuul00000000000000# Copyright 2015 - StackStorm, Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import mock from oslo_config import cfg from mistral.db.v2 import api as db_api from mistral.executors import default_executor as d_exe from mistral.executors import remote_executor as r_exe from mistral.services import workbooks as wb_service from mistral.services import workflows as wf_service from mistral.tests.unit.engine import base # Use the set_default method to set value otherwise in certain test cases # the change in value is not permanent. cfg.CONF.set_default('auth_enable', False, group='pecan') TARGET = '10.1.15.251' WORKBOOK = """ --- version: '2.0' name: my_wb workflows: wf1: type: reverse input: - param1 - param2 output: final_result: <% $.final_result %> tasks: task1: action: std.echo output=<% $.param1 %> target: <% env().var1 %> publish: result1: <% task(task1).result %> task2: requires: [task1] action: std.echo output="'<% $.result1 %> & <% $.param2 %>'" target: <% env().var1 %> publish: final_result: <% task(task2).result %> wf2: output: slogan: <% $.slogan %> tasks: task1: workflow: wf1 input: param1: <% env().var2 %> param2: <% env().var3 %> task_name: task2 publish: slogan: > <% task(task1).result.final_result %> is a cool <% env().var4 %>! """ def _run_at_target(action_ex_id, action_cls_str, action_cls_attrs, params, safe_rerun, execution_context, target=None, async_=True, timeout=None): # We'll just call executor directly for testing purposes. 
executor = d_exe.DefaultExecutor() executor.run_action( action_ex_id, action_cls_str, action_cls_attrs, params, safe_rerun, execution_context=execution_context, target=target, async_=async_, timeout=timeout ) MOCK_RUN_AT_TARGET = mock.MagicMock(side_effect=_run_at_target) class EnvironmentTest(base.EngineTestCase): def setUp(self): super(EnvironmentTest, self).setUp() wb_service.create_workbook_v2(WORKBOOK) @mock.patch.object(r_exe.RemoteExecutor, 'run_action', MOCK_RUN_AT_TARGET) def _test_subworkflow(self, env): wf2_ex = self.engine.start_workflow('my_wb.wf2', env=env) # Execution of 'wf2'. self.assertIsNotNone(wf2_ex) self.assertDictEqual({}, wf2_ex.input) self.assertDictContainsSubset({'env': env}, wf2_ex.params) self._await(lambda: len(db_api.get_workflow_executions()) == 2, 0.5, 5) wf_execs = db_api.get_workflow_executions() self.assertEqual(2, len(wf_execs)) # Execution of 'wf1'. wf2_ex = self._assert_single_item(wf_execs, name='my_wb.wf2') wf1_ex = self._assert_single_item(wf_execs, name='my_wb.wf1') expected_start_params = { 'task_name': 'task2', 'task_execution_id': wf1_ex.task_execution_id, 'env': env } expected_wf1_input = { 'param1': 'Bonnie', 'param2': 'Clyde' } self.assertIsNotNone(wf1_ex.task_execution_id) self.assertDictContainsSubset(expected_start_params, wf1_ex.params) self.assertDictEqual(wf1_ex.input, expected_wf1_input) # Wait till workflow 'wf1' is completed. self.await_workflow_success(wf1_ex.id) with db_api.transaction(): wf1_ex = db_api.get_workflow_execution(wf1_ex.id) self.assertDictEqual( {'final_result': "'Bonnie & Clyde'"}, wf1_ex.output ) # Wait till workflow 'wf2' is completed. self.await_workflow_success(wf2_ex.id) with db_api.transaction(): wf2_ex = db_api.get_workflow_execution(wf2_ex.id) self.assertDictEqual( {'slogan': "'Bonnie & Clyde' is a cool movie!\n"}, wf2_ex.output ) with db_api.transaction(): # Check if target is resolved. 
wf1_task_execs = db_api.get_task_executions( workflow_execution_id=wf1_ex.id ) self._assert_single_item(wf1_task_execs, name='task1') self._assert_single_item(wf1_task_execs, name='task2') for t_ex in wf1_task_execs: a_ex = t_ex.action_executions[0] callback_url = '/v2/action_executions/%s' % a_ex.id r_exe.RemoteExecutor.run_action.assert_any_call( a_ex.id, 'mistral.actions.std_actions.EchoAction', {}, a_ex.input, False, { 'task_id': t_ex.id, 'callback_url': callback_url, 'workflow_execution_id': wf1_ex.id, 'workflow_name': wf1_ex.name, 'action_execution_id': a_ex.id, }, target=TARGET, timeout=None ) def test_subworkflow_env_task_input(self): env = { 'var1': TARGET, 'var2': 'Bonnie', 'var3': 'Clyde', 'var4': 'movie' } self._test_subworkflow(env) def test_subworkflow_env_recursive(self): env = { 'var1': TARGET, 'var2': 'Bonnie', 'var3': '<% env().var5 %>', 'var4': 'movie', 'var5': 'Clyde' } self._test_subworkflow(env) def test_evaluate_env_parameter(self): wf_text = """--- version: '2.0' wf: tasks: task1: action: std.noop publish: var1: <% env().var1 %> var2: <% env().var2 %> """ wf_service.create_workflows(wf_text) env = { "var1": "val1", "var2": "<% env().var1 %>" } # Run with 'evaluate_env' set to True. wf_ex = self.engine.start_workflow( 'wf', env=env, evaluate_env=True ) self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) t = self._assert_single_item(wf_ex.task_executions, name='task1') self.assertDictEqual( { "var1": "val1", "var2": "val1" }, t.published ) # Run with 'evaluate_env' set to False. 
wf_ex = self.engine.start_workflow( 'wf', env=env, evaluate_env=False ) self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) t = self._assert_single_item(wf_ex.task_executions, name='task1') self.assertDictEqual( { "var1": "val1", "var2": "<% env().var1 %>" }, t.published ) def test_evaluate_env_parameter_subworkflow(self): wf_text = """--- version: '2.0' parent_wf: tasks: task1: workflow: sub_wf sub_wf: output: result: <% $.result %> tasks: task1: action: std.noop publish: result: <% env().dummy %> """ wf_service.create_workflows(wf_text) # Run with 'evaluate_env' set to False. env = {"dummy": "<% $.ENSURE.MISTRAL.DOESNT.EVALUATE.ENV %>"} parent_wf_ex = self.engine.start_workflow( 'parent_wf', env=env, evaluate_env=False ) self.await_workflow_success(parent_wf_ex.id) with db_api.transaction(): parent_wf_ex = db_api.get_workflow_execution(parent_wf_ex.id) t = self._assert_single_item( parent_wf_ex.task_executions, name='task1' ) sub_wf_ex = db_api.get_workflow_executions( task_execution_id=t.id )[0] self.assertDictEqual( { "result": "<% $.ENSURE.MISTRAL.DOESNT.EVALUATE.ENV %>" }, sub_wf_ex.output ) # Run with 'evaluate_env' set to True. env = {"dummy": "<% 1 + 1 %>"} parent_wf_ex = self.engine.start_workflow( 'parent_wf', env=env, evaluate_env=True ) self.await_workflow_success(parent_wf_ex.id) with db_api.transaction(): parent_wf_ex = db_api.get_workflow_execution(parent_wf_ex.id) t = self._assert_single_item( parent_wf_ex.task_executions, name='task1' ) sub_wf_ex = db_api.get_workflow_executions( task_execution_id=t.id )[0] self.assertDictEqual( { "result": 2 }, sub_wf_ex.output ) mistral-6.0.0/mistral/tests/unit/engine/test_javascript_action.py0000666000175100017510000000614113245513262025400 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import mock
from oslo_config import cfg
import testtools

from mistral.db.v2 import api as db_api
from mistral.services import workbooks as wb_service
from mistral.tests.unit.engine import base
from mistral.utils import javascript
from mistral.workflow import states

# Use the set_default method to set value otherwise in certain test cases
# the change in value is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan')

WORKBOOK = """
---
version: "2.0"

name: test_js

workflows:
  js_test:
    type: direct

    input:
      - num

    tasks:
      task1:
        description: |
          This task reads variable from context, increasing its value
          10 times, writes result to context and returns 100 (expected result)
        action: std.javascript
        input:
          script: |
            return $['num'] * 10
          context: <% $ %>
        publish:
          result: <% task(task1).result %>
"""


def fake_evaluate(_, context):
    return context['num'] * 10


class JavaScriptEngineTest(base.EngineTestCase):
    @testtools.skip('It requires installed JS engine.')
    def test_javascript_action(self):
        wb_service.create_workbook_v2(WORKBOOK)

        # Start workflow.
        wf_ex = self.engine.start_workflow(
            'test_js.js_test',
            wf_input={'num': 50}
        )

        self.await_workflow_success(wf_ex.id)

        # Note: We need to reread execution to access related tasks.
        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        task_ex = wf_ex.task_executions[0]

        self.assertEqual(states.SUCCESS, task_ex.state)
        self.assertDictEqual({}, task_ex.runtime_context)
        self.assertEqual(500, task_ex.published['num_10_times'])
        self.assertEqual(100, task_ex.published['result'])

    @mock.patch.object(javascript, 'evaluate', fake_evaluate)
    def test_fake_javascript_action_data_context(self):
        wb_service.create_workbook_v2(WORKBOOK)

        # Start workflow.
        wf_ex = self.engine.start_workflow(
            'test_js.js_test',
            wf_input={'num': 50}
        )

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_ex = wf_ex.task_executions[0]

            self.assertEqual(states.SUCCESS, task_ex.state)
            self.assertDictEqual({}, task_ex.runtime_context)
            self.assertEqual(500, task_ex.published['result'])

mistral-6.0.0/mistral/tests/unit/engine/test_direct_workflow_rerun.py

# Copyright 2015 - StackStorm, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock
from oslo_config import cfg
import testtools

from mistral.actions import std_actions
from mistral.db.v2 import api as db_api
from mistral import exceptions as exc
from mistral.services import workbooks as wb_service
from mistral.tests.unit.engine import base
from mistral.workflow import states

# Use the set_default method to set value otherwise in certain test cases
# the change in value is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan')

SIMPLE_WORKBOOK = """
---
version: '2.0'

name: wb1

workflows:
  wf1:
    type: direct

    tasks:
      t1:
        action: std.echo output="Task 1"
        on-success:
          - t2
      t2:
        action: std.echo output="Task 2"
        on-success:
          - t3
      t3:
        action: std.echo output="Task 3"
"""

SIMPLE_WORKBOOK_DIFF_ENV_VAR = """
---
version: '2.0'

name: wb1

workflows:
  wf1:
    type: direct

    tasks:
      t10:
        action: std.echo output="Task 10"
        on-success:
          - t21
          - t30
      t21:
        action: std.echo output=<% env().var1 %>
        on-success:
          - t22
      t22:
        action: std.echo output="<% env().var2 %>"
        on-success:
          - t30
      t30:
        join: all
        action: std.echo output="<% env().var3 %>"
        wait-before: 1
"""

WITH_ITEMS_WORKBOOK = """
---
version: '2.0'

name: wb3

workflows:
  wf1:
    type: direct

    tasks:
      t1:
        with-items: i in <% list(range(0, 3)) %>
        action: std.echo output="Task 1.<% $.i %>"
        publish:
          v1: <% task(t1).result %>
        on-success:
          - t2
      t2:
        action: std.echo output="Task 2"
"""

WITH_ITEMS_WORKBOOK_DIFF_ENV_VAR = """
---
version: '2.0'

name: wb3

workflows:
  wf1:
    type: direct

    tasks:
      t1:
        with-items: i in <% list(range(0, 3)) %>
        action: std.echo output="Task 1.<% $.i %> [<% env().var1 %>]"
        publish:
          v1: <% task(t1).result %>
        on-success:
          - t2
      t2:
        action: std.echo output="Task 2"
"""

WITH_ITEMS_WORKBOOK_CONCURRENCY = """
---
version: '2.0'

name: wb3

workflows:
  wf1:
    type: direct

    tasks:
      t1:
        with-items: i in <% list(range(0, 4)) %>
        action: std.echo output="Task 1.<% $.i %>"
        concurrency: 2
        publish:
          v1: <% task(t1).result %>
        on-success:
          - t2
      t2:
        action: std.echo output="Task 2"
"""

JOIN_WORKBOOK = """
---
version: '2.0'
name: wb1

workflows:
  wf1:
    type: direct

    tasks:
      t1:
        action: std.echo output="Task 1"
        on-success:
          - t3
      t2:
        action: std.echo output="Task 2"
        on-success:
          - t3
      t3:
        action: std.echo output="Task 3"
        join: all
"""

SUBFLOW_WORKBOOK = """
version: '2.0'

name: wb1

workflows:
  wf1:
    type: direct

    tasks:
      t1:
        action: std.echo output="Task 1"
        on-success:
          - t2
      t2:
        workflow: wf2
        on-success:
          - t3
      t3:
        action: std.echo output="Task 3"

  wf2:
    type: direct

    output:
      result: <% task(wf2_t1).result %>

    tasks:
      wf2_t1:
        action: std.echo output="Task 2"
"""


class DirectWorkflowRerunTest(base.EngineTestCase):

    @mock.patch.object(
        std_actions.EchoAction,
        'run',
        mock.MagicMock(
            side_effect=[
                'Task 1',               # Mock task1 success for initial run.
                exc.ActionException(),  # Mock task2 exception for initial run.
                'Task 2',               # Mock task2 success for rerun.
                'Task 3'                # Mock task3 success.
            ]
        )
    )
    def test_rerun(self):
        wb_service.create_workbook_v2(SIMPLE_WORKBOOK)

        # Run workflow and fail task.
        wf_ex = self.engine.start_workflow('wb1.wf1')

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

        self.assertEqual(states.ERROR, wf_ex.state)
        self.assertIsNotNone(wf_ex.state_info)
        self.assertEqual(2, len(task_execs))

        task_1_ex = self._assert_single_item(task_execs, name='t1')
        task_2_ex = self._assert_single_item(task_execs, name='t2')

        self.assertEqual(states.SUCCESS, task_1_ex.state)
        self.assertEqual(states.ERROR, task_2_ex.state)
        self.assertIsNotNone(task_2_ex.state_info)

        # Resume workflow and re-run failed task.
        self.engine.rerun_workflow(task_2_ex.id)

        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertEqual(states.RUNNING, wf_ex.state)
        self.assertIsNone(wf_ex.state_info)

        # Wait for the workflow to succeed.
        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

        self.assertEqual(states.SUCCESS, wf_ex.state)
        self.assertIsNone(wf_ex.state_info)
        self.assertEqual(3, len(task_execs))

        task_1_ex = self._assert_single_item(task_execs, name='t1')
        task_2_ex = self._assert_single_item(task_execs, name='t2')
        task_3_ex = self._assert_single_item(task_execs, name='t3')

        # Check action executions of task 1.
        self.assertEqual(states.SUCCESS, task_1_ex.state)

        task_1_action_exs = db_api.get_action_executions(
            task_execution_id=task_1_ex.id
        )

        self.assertEqual(1, len(task_1_action_exs))
        self.assertEqual(states.SUCCESS, task_1_action_exs[0].state)

        # Check action executions of task 2.
        self.assertEqual(states.SUCCESS, task_2_ex.state)
        self.assertIsNone(task_2_ex.state_info)

        task_2_action_exs = db_api.get_action_executions(
            task_execution_id=task_2_ex.id
        )

        self.assertEqual(2, len(task_2_action_exs))

        # Check there is exactly 1 action in Success and 1 in error state.
        # Order doesn't matter.
        self._assert_single_item(task_2_action_exs, state=states.SUCCESS)
        self._assert_single_item(task_2_action_exs, state=states.ERROR)

        # Check action executions of task 3.
        self.assertEqual(states.SUCCESS, task_3_ex.state)

        task_3_action_exs = db_api.get_action_executions(
            task_execution_id=task_3_ex.id
        )

        self.assertEqual(1, len(task_3_action_exs))
        self.assertEqual(states.SUCCESS, task_3_action_exs[0].state)

    @mock.patch.object(
        std_actions.EchoAction,
        'run',
        mock.MagicMock(
            side_effect=[
                'Task 10',              # Mock task10 success for first run.
                exc.ActionException(),  # Mock task21 exception for first run.
                'Task 21',              # Mock task21 success for rerun.
                'Task 22',              # Mock task22 success.
                'Task 30'               # Mock task30 success.
            ]
        )
    )
    def test_rerun_diff_env_vars(self):
        wb_service.create_workbook_v2(SIMPLE_WORKBOOK_DIFF_ENV_VAR)

        # Initial environment variables for the workflow execution.
        env = {
            'var1': 'fee fi fo fum',
            'var2': 'mirror mirror',
            'var3': 'heigh-ho heigh-ho'
        }

        # Run workflow and fail task.
        wf_ex = self.engine.start_workflow('wb1.wf1', env=env)

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

        self.assertEqual(states.ERROR, wf_ex.state)
        self.assertIsNotNone(wf_ex.state_info)
        self.assertEqual(3, len(task_execs))
        self.assertDictEqual(env, wf_ex.params['env'])
        self.assertDictEqual(env, wf_ex.context['__env'])

        task_10_ex = self._assert_single_item(task_execs, name='t10')
        task_21_ex = self._assert_single_item(task_execs, name='t21')
        task_30_ex = self._assert_single_item(task_execs, name='t30')

        self.assertEqual(states.SUCCESS, task_10_ex.state)
        self.assertEqual(states.ERROR, task_21_ex.state)
        self.assertIsNotNone(task_21_ex.state_info)
        self.assertEqual(states.ERROR, task_30_ex.state)

        # Update env in workflow execution with the following.
        updated_env = {
            'var1': 'Task 21',
            'var2': 'Task 22',
            'var3': 'Task 30'
        }

        # Resume workflow and re-run failed task.
        wf_ex = self.engine.rerun_workflow(task_21_ex.id, env=updated_env)

        self.assertEqual(states.RUNNING, wf_ex.state)
        self.assertIsNone(wf_ex.state_info)
        self.assertDictEqual(updated_env, wf_ex.params['env'])
        self.assertDictEqual(updated_env, wf_ex.context['__env'])

        # Await t30 success.
        self.await_task_success(task_30_ex.id)

        # Wait for the workflow to succeed.
        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

        self.assertEqual(states.SUCCESS, wf_ex.state)
        self.assertIsNone(wf_ex.state_info)
        self.assertEqual(4, len(task_execs))

        task_10_ex = self._assert_single_item(task_execs, name='t10')
        task_21_ex = self._assert_single_item(task_execs, name='t21')
        task_22_ex = self._assert_single_item(task_execs, name='t22')
        task_30_ex = self._assert_single_item(task_execs, name='t30')

        # Check action executions of task 10.
        self.assertEqual(states.SUCCESS, task_10_ex.state)

        task_10_action_exs = db_api.get_action_executions(
            task_execution_id=task_10_ex.id
        )

        self.assertEqual(1, len(task_10_action_exs))
        self.assertEqual(states.SUCCESS, task_10_action_exs[0].state)
        self.assertDictEqual(
            {'output': 'Task 10'},
            task_10_action_exs[0].input
        )

        # Check action executions of task 21.
        self.assertEqual(states.SUCCESS, task_21_ex.state)
        self.assertIsNone(task_21_ex.state_info)

        task_21_action_exs = db_api.get_action_executions(
            task_execution_id=task_21_ex.id
        )

        self.assertEqual(2, len(task_21_action_exs))

        # Check there is exactly 1 action in Success and 1 in error state.
        # Order doesn't matter.
        task_21_action_exs_1 = self._assert_single_item(
            task_21_action_exs,
            state=states.ERROR
        )
        task_21_action_exs_2 = self._assert_single_item(
            task_21_action_exs,
            state=states.SUCCESS
        )

        self.assertDictEqual(
            {'output': env['var1']},
            task_21_action_exs_1.input
        )
        self.assertDictEqual(
            {'output': updated_env['var1']},
            task_21_action_exs_2.input
        )

        # Check action executions of task 22.
        self.assertEqual(states.SUCCESS, task_22_ex.state)

        task_22_action_exs = db_api.get_action_executions(
            task_execution_id=task_22_ex.id
        )

        self.assertEqual(1, len(task_22_action_exs))
        self.assertEqual(states.SUCCESS, task_22_action_exs[0].state)
        self.assertDictEqual(
            {'output': updated_env['var2']},
            task_22_action_exs[0].input
        )

        # Check action executions of task 30.
        self.assertEqual(states.SUCCESS, task_30_ex.state)

        task_30_action_exs = db_api.get_action_executions(
            task_execution_id=task_30_ex.id
        )

        self.assertEqual(1, len(task_30_action_exs))
        self.assertEqual(states.SUCCESS, task_30_action_exs[0].state)
        self.assertDictEqual(
            {'output': updated_env['var3']},
            task_30_action_exs[0].input
        )

    @mock.patch.object(
        std_actions.EchoAction,
        'run',
        mock.MagicMock(
            side_effect=[
                'Task 1',              # Mock task1 success for initial run.
                exc.ActionException()  # Mock task2 exception for initial run.
            ]
        )
    )
    def test_rerun_from_prev_step(self):
        wb_service.create_workbook_v2(SIMPLE_WORKBOOK)

        # Run workflow and fail task.
        wf_ex = self.engine.start_workflow('wb1.wf1')

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

        self.assertEqual(states.ERROR, wf_ex.state)
        self.assertIsNotNone(wf_ex.state_info)
        self.assertEqual(2, len(task_execs))

        task_1_ex = self._assert_single_item(
            task_execs,
            name='t1',
            state=states.SUCCESS
        )
        task_2_ex = self._assert_single_item(
            task_execs,
            name='t2',
            state=states.ERROR
        )

        self.assertIsNotNone(task_2_ex.state_info)

        # Resume workflow and re-run failed task.
        e = self.assertRaises(
            exc.MistralError,
            self.engine.rerun_workflow,
            task_1_ex.id
        )

        self.assertIn('not supported', str(e))

    @mock.patch.object(
        std_actions.EchoAction,
        'run',
        mock.MagicMock(
            side_effect=[
                exc.ActionException(),  # Mock task1 exception for initial run.
                'Task 1.1',             # Mock task1 success for initial run.
                exc.ActionException(),  # Mock task1 exception for initial run.
                'Task 1.0',             # Mock task1 success for rerun.
                'Task 1.2',             # Mock task1 success for rerun.
                'Task 2'                # Mock task2 success.
            ]
        )
    )
    def test_rerun_with_items(self):
        wb_service.create_workbook_v2(WITH_ITEMS_WORKBOOK)

        # Run workflow and fail task.
        wf_ex = self.engine.start_workflow('wb3.wf1')

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

        self.assertEqual(states.ERROR, wf_ex.state)
        self.assertIsNotNone(wf_ex.state_info)
        self.assertEqual(1, len(task_execs))

        task_1_ex = self._assert_single_item(task_execs, name='t1')

        self.assertEqual(states.ERROR, task_1_ex.state)
        self.assertIsNotNone(task_1_ex.state_info)

        task_1_action_exs = db_api.get_action_executions(
            task_execution_id=task_1_ex.id
        )

        self.assertEqual(3, len(task_1_action_exs))

        # Resume workflow and re-run failed task.
        self.engine.rerun_workflow(task_1_ex.id, reset=False)

        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertEqual(states.RUNNING, wf_ex.state)
        self.assertIsNone(wf_ex.state_info)

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

        self.assertEqual(states.SUCCESS, wf_ex.state)
        self.assertIsNone(wf_ex.state_info)
        self.assertEqual(2, len(task_execs))

        task_1_ex = self._assert_single_item(task_execs, name='t1')
        task_2_ex = self._assert_single_item(task_execs, name='t2')

        # Check action executions of task 1.
        self.assertEqual(states.SUCCESS, task_1_ex.state)
        self.assertIsNone(task_1_ex.state_info)

        task_1_action_exs = db_api.get_action_executions(
            task_execution_id=task_1_ex.id
        )

        # The single action execution that succeeded should not re-run.
        self.assertEqual(5, len(task_1_action_exs))
        self.assertListEqual(
            ['Task 1.0', 'Task 1.1', 'Task 1.2'],
            task_1_ex.published.get('v1')
        )

        # Check action executions of task 2.
        self.assertEqual(states.SUCCESS, task_2_ex.state)

        task_2_action_exs = db_api.get_action_executions(
            task_execution_id=task_2_ex.id
        )

        self.assertEqual(1, len(task_2_action_exs))

    @testtools.skip('Restore concurrency support.')
    @mock.patch.object(
        std_actions.EchoAction,
        'run',
        mock.MagicMock(
            side_effect=[
                exc.ActionException(),  # Mock task1 exception for initial run.
                'Task 1.1',             # Mock task1 success for initial run.
                exc.ActionException(),  # Mock task1 exception for initial run.
                'Task 1.3',             # Mock task1 success for initial run.
                'Task 1.0',             # Mock task1 success for rerun.
                'Task 1.2',             # Mock task1 success for rerun.
                'Task 2'                # Mock task2 success.
            ]
        )
    )
    def test_rerun_with_items_concurrency(self):
        wb_service.create_workbook_v2(WITH_ITEMS_WORKBOOK_CONCURRENCY)

        # Run workflow and fail task.
        wf_ex = self.engine.start_workflow('wb3.wf1')

        self.await_workflow_error(wf_ex.id)

        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertEqual(states.ERROR, wf_ex.state)
        self.assertIsNotNone(wf_ex.state_info)
        self.assertEqual(1, len(wf_ex.task_executions))

        task_1_ex = self._assert_single_item(wf_ex.task_executions, name='t1')

        self.assertEqual(states.ERROR, task_1_ex.state)

        task_1_action_exs = db_api.get_action_executions(
            task_execution_id=task_1_ex.id
        )

        self.assertEqual(4, len(task_1_action_exs))

        # Resume workflow and re-run failed task.
        self.engine.rerun_workflow(task_1_ex.id, reset=False)

        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertEqual(states.RUNNING, wf_ex.state)
        self.assertIsNone(wf_ex.state_info)

        self.await_workflow_success(wf_ex.id)

        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertEqual(states.SUCCESS, wf_ex.state)
        self.assertIsNone(wf_ex.state_info)
        self.assertEqual(2, len(wf_ex.task_executions))

        task_1_ex = self._assert_single_item(wf_ex.task_executions, name='t1')
        task_2_ex = self._assert_single_item(wf_ex.task_executions, name='t2')

        # Check action executions of task 1.
        self.assertEqual(states.SUCCESS, task_1_ex.state)
        self.assertIsNone(task_1_ex.state_info)

        task_1_action_exs = db_api.get_action_executions(
            task_execution_id=task_1_ex.id
        )

        # The action executions that succeeded should not re-run.
        self.assertEqual(6, len(task_1_action_exs))
        self.assertListEqual(
            ['Task 1.0', 'Task 1.1', 'Task 1.2', 'Task 1.3'],
            task_1_ex.published.get('v1')
        )

        # Check action executions of task 2.
        self.assertEqual(states.SUCCESS, task_2_ex.state)

        task_2_action_exs = db_api.get_action_executions(
            task_execution_id=task_2_ex.id
        )

        self.assertEqual(1, len(task_2_action_exs))

    @mock.patch.object(
        std_actions.EchoAction,
        'run',
        mock.MagicMock(
            side_effect=[
                exc.ActionException(),  # Mock task1 exception for initial run.
                'Task 1.1',             # Mock task1 success for initial run.
                exc.ActionException(),  # Mock task1 exception for initial run.
                'Task 1.0',             # Mock task1 success for rerun.
                'Task 1.2',             # Mock task1 success for rerun.
                'Task 2'                # Mock task2 success.
            ]
        )
    )
    def test_rerun_with_items_diff_env_vars(self):
        wb_service.create_workbook_v2(WITH_ITEMS_WORKBOOK_DIFF_ENV_VAR)

        # Initial environment variables for the workflow execution.
        env = {'var1': 'fee fi fo fum'}

        # Run workflow and fail task.
        wf_ex = self.engine.start_workflow('wb3.wf1', env=env)

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

        self.assertEqual(states.ERROR, wf_ex.state)
        self.assertIsNotNone(wf_ex.state_info)
        self.assertEqual(1, len(task_execs))

        task_1_ex = self._assert_single_item(task_execs, name='t1')

        self.assertEqual(states.ERROR, task_1_ex.state)
        self.assertIsNotNone(task_1_ex.state_info)

        task_1_action_exs = db_api.get_action_executions(
            task_execution_id=task_1_ex.id
        )

        self.assertEqual(3, len(task_1_action_exs))

        # Update env in workflow execution with the following.
        updated_env = {'var1': 'foobar'}

        # Resume workflow and re-run failed task.
        self.engine.rerun_workflow(
            task_1_ex.id,
            reset=False,
            env=updated_env
        )

        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertEqual(states.RUNNING, wf_ex.state)
        self.assertIsNone(wf_ex.state_info)

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

        self.assertEqual(states.SUCCESS, wf_ex.state)
        self.assertIsNone(wf_ex.state_info)
        self.assertEqual(2, len(task_execs))

        task_1_ex = self._assert_single_item(task_execs, name='t1')
        task_2_ex = self._assert_single_item(task_execs, name='t2')

        # Check action executions of task 1.
        self.assertEqual(states.SUCCESS, task_1_ex.state)
        self.assertIsNone(task_1_ex.state_info)

        task_1_action_exs = db_api.get_action_executions(
            task_execution_id=task_1_ex.id
        )

        expected_inputs = [
            'Task 1.0 [%s]' % env['var1'],          # Task 1 item 0 (error).
            'Task 1.1 [%s]' % env['var1'],          # Task 1 item 1.
            'Task 1.2 [%s]' % env['var1'],          # Task 1 item 2 (error).
            'Task 1.0 [%s]' % updated_env['var1'],  # Task 1 item 0 (rerun).
            'Task 1.2 [%s]' % updated_env['var1']   # Task 1 item 2 (rerun).
        ]

        # Assert that every expected input is in actual task input.
        for action_ex in task_1_action_exs:
            self.assertIn(action_ex.input['output'], expected_inputs)

        # Assert that there was same number of unique inputs as action execs.
        self.assertEqual(
            len(task_1_action_exs),
            len(set(
                [action_ex.input['output'] for action_ex in task_1_action_exs]
            ))
        )

        # Check action executions of task 2.
        self.assertEqual(states.SUCCESS, task_2_ex.state)

        task_2_action_exs = db_api.get_action_executions(
            task_execution_id=task_2_ex.id
        )

        self.assertEqual(1, len(task_2_action_exs))

    @mock.patch.object(
        std_actions.EchoAction,
        'run',
        mock.MagicMock(
            side_effect=[
                'Task 1',               # Mock task1 success for initial run.
                'Task 2',               # Mock task2 success for initial run.
                exc.ActionException(),  # Mock task3 exception for initial run.
                'Task 3'                # Mock task3 success for rerun.
            ]
        )
    )
    def test_rerun_on_join_task(self):
        wb_service.create_workbook_v2(JOIN_WORKBOOK)

        # Run workflow and fail task.
        wf_ex = self.engine.start_workflow('wb1.wf1')

        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

        self.assertEqual(states.ERROR, wf_ex.state)
        self.assertIsNotNone(wf_ex.state_info)
        self.assertEqual(3, len(task_execs))

        task_1_ex = self._assert_single_item(task_execs, name='t1')
        task_2_ex = self._assert_single_item(task_execs, name='t2')
        task_3_ex = self._assert_single_item(task_execs, name='t3')

        self.assertEqual(states.SUCCESS, task_1_ex.state)
        self.assertEqual(states.SUCCESS, task_2_ex.state)
        self.assertEqual(states.ERROR, task_3_ex.state)
        self.assertIsNotNone(task_3_ex.state_info)

        # Resume workflow and re-run failed task.
        self.engine.rerun_workflow(task_3_ex.id)

        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertEqual(states.RUNNING, wf_ex.state)
        self.assertIsNone(wf_ex.state_info)

        # Wait for the workflow to succeed.
        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

        self.assertEqual(states.SUCCESS, wf_ex.state)
        self.assertIsNone(wf_ex.state_info)
        self.assertEqual(3, len(task_execs))

        task_1_ex = self._assert_single_item(task_execs, name='t1')
        task_2_ex = self._assert_single_item(task_execs, name='t2')
        task_3_ex = self._assert_single_item(task_execs, name='t3')

        # Check action executions of task 1.
        self.assertEqual(states.SUCCESS, task_1_ex.state)

        task_1_action_exs = db_api.get_action_executions(
            task_execution_id=task_1_ex.id
        )

        self.assertEqual(1, len(task_1_action_exs))
        self.assertEqual(states.SUCCESS, task_1_action_exs[0].state)

        # Check action executions of task 2.
        self.assertEqual(states.SUCCESS, task_2_ex.state)

        task_2_action_exs = db_api.get_action_executions(
            task_execution_id=task_2_ex.id
        )

        self.assertEqual(1, len(task_2_action_exs))
        self.assertEqual(states.SUCCESS, task_2_action_exs[0].state)

        # Check action executions of task 3.
        self.assertEqual(states.SUCCESS, task_3_ex.state)
        self.assertIsNone(task_3_ex.state_info)

        task_3_action_exs = db_api.get_action_executions(
            task_execution_id=task_3_ex.id
        )

        self.assertEqual(2, len(task_3_action_exs))

        # Check there is exactly 1 action in Success and 1 in error state.
        # Order doesn't matter.
        self._assert_single_item(task_3_action_exs, state=states.SUCCESS)
        self._assert_single_item(task_3_action_exs, state=states.ERROR)

    @mock.patch.object(
        std_actions.EchoAction,
        'run',
        mock.MagicMock(
            side_effect=[
                exc.ActionException(),  # Mock task1 exception for initial run.
                exc.ActionException(),  # Mock task2 exception for initial run.
                'Task 1',               # Mock task1 success for rerun.
                'Task 2',               # Mock task2 success for rerun.
                'Task 3'                # Mock task3 success.
            ]
        )
    )
    def test_rerun_join_with_branch_errors(self):
        wb_service.create_workbook_v2(JOIN_WORKBOOK)

        # Run workflow and fail task.
        wf_ex = self.engine.start_workflow('wb1.wf1')

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            task_1_ex = self._assert_single_item(task_execs, name='t1')
            task_2_ex = self._assert_single_item(task_execs, name='t2')

        self.await_task_error(task_1_ex.id)
        self.await_task_error(task_2_ex.id)

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

        self.assertEqual(states.ERROR, wf_ex.state)
        self.assertIsNotNone(wf_ex.state_info)
        self.assertEqual(2, len(task_execs))

        task_1_ex = self._assert_single_item(task_execs, name='t1')

        self.assertEqual(states.ERROR, task_1_ex.state)
        self.assertIsNotNone(task_1_ex.state_info)

        task_2_ex = self._assert_single_item(task_execs, name='t2')

        self.assertEqual(states.ERROR, task_2_ex.state)
        self.assertIsNotNone(task_2_ex.state_info)

        # Resume workflow and re-run failed task.
        wf_ex = self.engine.rerun_workflow(task_1_ex.id)

        self.assertEqual(states.RUNNING, wf_ex.state)
        self.assertIsNone(wf_ex.state_info)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

        # Wait for the task to succeed.
        task_1_ex = self._assert_single_item(task_execs, name='t1')

        self.await_task_success(task_1_ex.id)

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

        self.assertEqual(states.ERROR, wf_ex.state)
        self.assertIsNotNone(wf_ex.state_info)
        self.assertEqual(3, len(task_execs))

        task_1_ex = self._assert_single_item(task_execs, name='t1')
        task_2_ex = self._assert_single_item(task_execs, name='t2')
        task_3_ex = self._assert_single_item(task_execs, name='t3')

        self.assertEqual(states.SUCCESS, task_1_ex.state)
        self.assertEqual(states.ERROR, task_2_ex.state)
        self.assertEqual(states.ERROR, task_3_ex.state)

        # Resume workflow and re-run failed task.
        wf_ex = self.engine.rerun_workflow(task_2_ex.id)

        self.assertEqual(states.RUNNING, wf_ex.state)
        self.assertIsNone(wf_ex.state_info)

        # Join now should finally complete.
        self.await_task_success(task_3_ex.id)

        # Wait for the workflow to succeed.
        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

        self.assertEqual(states.SUCCESS, wf_ex.state)
        self.assertIsNone(wf_ex.state_info)
        self.assertEqual(3, len(task_execs))

        task_1_ex = self._assert_single_item(task_execs, name='t1')
        task_2_ex = self._assert_single_item(task_execs, name='t2')
        task_3_ex = self._assert_single_item(task_execs, name='t3')

        # Check action executions of task 1.
        self.assertEqual(states.SUCCESS, task_1_ex.state)
        self.assertIsNone(task_1_ex.state_info)

        task_1_action_exs = db_api.get_action_executions(
            task_execution_id=task_1_ex.id
        )

        self.assertEqual(2, len(task_1_action_exs))

        # Check there is exactly 1 action in Success and 1 in error state.
        # Order doesn't matter.
        self._assert_single_item(task_1_action_exs, state=states.SUCCESS)
        self._assert_single_item(task_1_action_exs, state=states.ERROR)

        # Check action executions of task 2.
        self.assertEqual(states.SUCCESS, task_2_ex.state)
        self.assertIsNone(task_2_ex.state_info)

        task_2_action_exs = db_api.get_action_executions(
            task_execution_id=task_2_ex.id
        )

        self.assertEqual(2, len(task_2_action_exs))

        # Check there is exactly 1 action in Success and 1 in error state.
        # Order doesn't matter.
        self._assert_single_item(task_2_action_exs, state=states.SUCCESS)
        self._assert_single_item(task_2_action_exs, state=states.ERROR)

        # Check action executions of task 3.
        self.assertEqual(states.SUCCESS, task_3_ex.state)

        task_3_action_exs = db_api.get_action_executions(
            task_execution_id=task_3_ex.id
        )

        self.assertEqual(1, len(task_3_action_exs))
        self.assertEqual(states.SUCCESS, task_3_action_exs[0].state)

    @mock.patch.object(
        std_actions.EchoAction,
        'run',
        mock.MagicMock(
            side_effect=[
                exc.ActionException(),  # Mock task 1.0 error for run.
                'Task 1.1',             # Mock task 1.1 success for run.
                exc.ActionException(),  # Mock task 1.2 error for run.
                exc.ActionException(),  # Mock task 1.0 error for 1st rerun.
                exc.ActionException(),  # Mock task 1.2 error for 1st rerun.
                exc.ActionException(),  # Mock task 1.0 error for 2nd run.
                'Task 1.1',             # Mock task 1.1 success for 2nd run.
                exc.ActionException(),  # Mock task 1.2 error for 2nd run.
                exc.ActionException(),  # Mock task 1.0 error for 3rd rerun.
                exc.ActionException(),  # Mock task 1.2 error for 3rd rerun.
                'Task 1.0',             # Mock task 1.0 success for 4th rerun.
                'Task 1.2',             # Mock task 1.2 success for 4th rerun.
                'Task 2'                # Mock task 2 success.
            ]
        )
    )
    def test_multiple_reruns_with_items(self):
        wb_service.create_workbook_v2(WITH_ITEMS_WORKBOOK)

        # Run workflow and fail task.
        wf_ex = self.engine.start_workflow('wb3.wf1')

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

        self.assertEqual(states.ERROR, wf_ex.state)
        self.assertIsNotNone(wf_ex.state_info)
        self.assertEqual(1, len(task_execs))

        task_1_ex = self._assert_single_item(task_execs, name='t1')

        self.await_task_error(task_1_ex.id)

        self.assertIsNotNone(task_1_ex.state_info)

        task_1_action_exs = db_api.get_action_executions(
            task_execution_id=task_1_ex.id
        )

        self.assertEqual(3, len(task_1_action_exs))

        # Resume workflow and re-run failed task. Re-run #1 with no reset.
        wf_ex = self.engine.rerun_workflow(task_1_ex.id, reset=False)

        self.assertEqual(states.RUNNING, wf_ex.state)
        self.assertIsNone(wf_ex.state_info)

        self.await_workflow_error(wf_ex.id)

        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertEqual(states.ERROR, wf_ex.state)
        self.assertIsNotNone(wf_ex.state_info)

        task_1_action_exs = db_api.get_action_executions(
            task_execution_id=task_1_ex.id
        )

        self.assertEqual(5, len(task_1_action_exs))

        # Resume workflow and re-run failed task. Re-run #2 with reset.
        self.engine.rerun_workflow(task_1_ex.id, reset=True)

        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertEqual(states.RUNNING, wf_ex.state)
        self.assertIsNone(wf_ex.state_info)

        self.await_workflow_error(wf_ex.id)

        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertEqual(states.ERROR, wf_ex.state)
        self.assertIsNotNone(wf_ex.state_info)

        task_1_action_exs = db_api.get_action_executions(
            task_execution_id=task_1_ex.id
        )

        self.assertEqual(8, len(task_1_action_exs))

        # Resume workflow and re-run failed task. Re-run #3 with no reset.
        self.engine.rerun_workflow(task_1_ex.id, reset=False)

        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertEqual(states.RUNNING, wf_ex.state)
        self.assertIsNone(wf_ex.state_info)

        self.await_workflow_error(wf_ex.id)

        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertEqual(states.ERROR, wf_ex.state)
        self.assertIsNotNone(wf_ex.state_info)

        task_1_action_exs = db_api.get_action_executions(
            task_execution_id=task_1_ex.id
        )

        self.assertEqual(10, len(task_1_action_exs))

        # Resume workflow and re-run failed task. Re-run #4 with no reset.
        self.engine.rerun_workflow(task_1_ex.id, reset=False)

        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertEqual(states.RUNNING, wf_ex.state)
        self.assertIsNone(wf_ex.state_info)

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

        self.assertEqual(states.SUCCESS, wf_ex.state)
        self.assertIsNone(wf_ex.state_info)
        self.assertEqual(2, len(task_execs))

        task_1_ex = self._assert_single_item(task_execs, name='t1')
        task_2_ex = self._assert_single_item(task_execs, name='t2')

        # Check action executions of task 1.
        self.assertEqual(states.SUCCESS, task_1_ex.state)
        self.assertIsNone(task_1_ex.state_info)

        task_1_action_exs = db_api.get_action_executions(
            task_execution_id=task_1_ex.id
        )

        # The single action execution that succeeded should not re-run.
        self.assertEqual(12, len(task_1_action_exs))
        self.assertListEqual(
            ['Task 1.0', 'Task 1.1', 'Task 1.2'],
            task_1_ex.published.get('v1')
        )

        # Check action executions of task 2.
        self.assertEqual(states.SUCCESS, task_2_ex.state)

        task_2_action_exs = db_api.get_action_executions(
            task_execution_id=task_2_ex.id
        )

        self.assertEqual(1, len(task_2_action_exs))

    @mock.patch.object(
        std_actions.EchoAction,
        'run',
        mock.MagicMock(
            side_effect=[
                'Task 1',               # Mock task1 success for initial run.
                exc.ActionException(),  # Mock task2 exception for initial run.
                'Task 2',               # Mock task2 success for rerun.
'Task 3' # Mock task3 success. ] ) ) def test_rerun_subflow(self): wb_service.create_workbook_v2(SUBFLOW_WORKBOOK) # Run workflow and fail task. wf_ex = self.engine.start_workflow('wb1.wf1') self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions self.assertEqual(states.ERROR, wf_ex.state) self.assertIsNotNone(wf_ex.state_info) self.assertEqual(2, len(task_execs)) task_1_ex = self._assert_single_item(task_execs, name='t1') task_2_ex = self._assert_single_item(task_execs, name='t2') self.assertEqual(states.SUCCESS, task_1_ex.state) self.assertEqual(states.ERROR, task_2_ex.state) self.assertIsNotNone(task_2_ex.state_info) # Resume workflow and re-run failed task. self.engine.rerun_workflow(task_2_ex.id) wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertEqual(states.RUNNING, wf_ex.state) self.assertIsNone(wf_ex.state_info) # Wait for the workflow to succeed. self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions self.assertEqual(states.SUCCESS, wf_ex.state) self.assertIsNone(wf_ex.state_info) self.assertEqual(3, len(task_execs)) task_1_ex = self._assert_single_item(task_execs, name='t1') task_2_ex = self._assert_single_item(task_execs, name='t2') task_3_ex = self._assert_single_item(task_execs, name='t3') # Check action executions of task 1. self.assertEqual(states.SUCCESS, task_1_ex.state) task_1_action_exs = db_api.get_action_executions( task_execution_id=task_1_ex.id) self.assertEqual(1, len(task_1_action_exs)) self.assertEqual(states.SUCCESS, task_1_action_exs[0].state) # Check action executions of task 2. 
self.assertEqual(states.SUCCESS, task_2_ex.state) self.assertIsNone(task_2_ex.state_info) task_2_action_exs = db_api.get_workflow_executions( task_execution_id=task_2_ex.id ) self.assertEqual(2, len(task_2_action_exs)) # Check there is exactly 1 action in Success and 1 in error state. # Order doesn't matter. self._assert_single_item(task_2_action_exs, state=states.SUCCESS) self._assert_single_item(task_2_action_exs, state=states.ERROR) # Check action executions of task 3. self.assertEqual(states.SUCCESS, task_3_ex.state) task_3_action_exs = db_api.get_action_executions( task_execution_id=task_3_ex.id ) self.assertEqual(1, len(task_3_action_exs)) self.assertEqual(states.SUCCESS, task_3_action_exs[0].state) @mock.patch.object( std_actions.EchoAction, 'run', mock.MagicMock( side_effect=[ 'Task 1', # Mock task1 success for initial run. exc.ActionException(), # Mock task2 exception for initial run. 'Task 2', # Mock task2 success for rerun. 'Task 3' # Mock task3 success. ] ) ) def test_rerun_subflow_task(self): wb_service.create_workbook_v2(SUBFLOW_WORKBOOK) # Run workflow and fail task. 
wf_ex = self.engine.start_workflow('wb1.wf1') self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions self.assertEqual(states.ERROR, wf_ex.state) self.assertIsNotNone(wf_ex.state_info) self.assertEqual(2, len(task_execs)) task_1_ex = self._assert_single_item(task_execs, name='t1') task_2_ex = self._assert_single_item(task_execs, name='t2') self.assertEqual(states.SUCCESS, task_1_ex.state) self.assertEqual(states.ERROR, task_2_ex.state) self.assertIsNotNone(task_2_ex.state_info) with db_api.transaction(): # Get subworkflow and related task sub_wf_exs = db_api.get_workflow_executions( task_execution_id=task_2_ex.id ) sub_wf_ex = sub_wf_exs[0] sub_wf_task_execs = sub_wf_ex.task_executions self.assertEqual(states.ERROR, sub_wf_ex.state) self.assertIsNotNone(sub_wf_ex.state_info) self.assertEqual(1, len(sub_wf_task_execs)) sub_wf_task_ex = self._assert_single_item( sub_wf_task_execs, name='wf2_t1' ) self.assertEqual(states.ERROR, sub_wf_task_ex.state) self.assertIsNotNone(sub_wf_task_ex.state_info) # Resume workflow and re-run failed subworkflow task. self.engine.rerun_workflow(sub_wf_task_ex.id) sub_wf_ex = db_api.get_workflow_execution(sub_wf_ex.id) self.assertEqual(states.RUNNING, sub_wf_ex.state) self.assertIsNone(sub_wf_ex.state_info) wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertEqual(states.RUNNING, wf_ex.state) self.assertIsNone(wf_ex.state_info) # Wait for the subworkflow to succeed. self.await_workflow_success(sub_wf_ex.id) with db_api.transaction(): sub_wf_ex = db_api.get_workflow_execution(sub_wf_ex.id) sub_wf_task_execs = sub_wf_ex.task_executions self.assertEqual(states.SUCCESS, sub_wf_ex.state) self.assertIsNone(sub_wf_ex.state_info) self.assertEqual(1, len(sub_wf_task_execs)) sub_wf_task_ex = self._assert_single_item( sub_wf_task_execs, name='wf2_t1' ) # Check action executions of subworkflow task. 
        self.assertEqual(states.SUCCESS, sub_wf_task_ex.state)
        self.assertIsNone(sub_wf_task_ex.state_info)

        sub_wf_task_ex_action_exs = db_api.get_action_executions(
            task_execution_id=sub_wf_task_ex.id
        )

        self.assertEqual(2, len(sub_wf_task_ex_action_exs))

        # Check there is exactly 1 action in Success and 1 in error state.
        # Order doesn't matter.
        self._assert_single_item(
            sub_wf_task_ex_action_exs,
            state=states.SUCCESS
        )
        self._assert_single_item(sub_wf_task_ex_action_exs, state=states.ERROR)

        # Wait for the main workflow to succeed.
        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(states.SUCCESS, wf_ex.state)
            self.assertIsNone(wf_ex.state_info)
            self.assertEqual(3, len(task_execs))

            task_1_ex = self._assert_single_item(task_execs, name='t1')
            task_2_ex = self._assert_single_item(task_execs, name='t2')
            task_3_ex = self._assert_single_item(task_execs, name='t3')

        # Check action executions of task 1.
        self.assertEqual(states.SUCCESS, task_1_ex.state)

        task_1_action_exs = db_api.get_action_executions(
            task_execution_id=task_1_ex.id
        )

        self.assertEqual(1, len(task_1_action_exs))
        self.assertEqual(states.SUCCESS, task_1_action_exs[0].state)

        # Check action executions of task 2.
        self.assertEqual(states.SUCCESS, task_2_ex.state)
        self.assertIsNone(task_2_ex.state_info)

        task_2_action_exs = db_api.get_workflow_executions(
            task_execution_id=task_2_ex.id
        )

        self.assertEqual(1, len(task_2_action_exs))
        self.assertEqual(states.SUCCESS, task_2_action_exs[0].state)

        # Check action executions of task 3.
        self.assertEqual(states.SUCCESS, task_3_ex.state)

        task_3_action_exs = db_api.get_action_executions(
            task_execution_id=task_3_ex.id)

        self.assertEqual(1, len(task_3_action_exs))
        self.assertEqual(states.SUCCESS, task_3_action_exs[0].state)

mistral-6.0.0/mistral/tests/unit/engine/test_workflow_resume.py

# Copyright 2014 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import mock

from oslo_config import cfg

from mistral.db.v2 import api as db_api
from mistral import exceptions as exc
from mistral.lang import parser as spec_parser
from mistral.services import workbooks as wb_service
from mistral.tests.unit.engine import base
from mistral.workflow import data_flow
from mistral.workflow import states
from mistral_lib import actions as ml_actions


# Use the set_default method to set value otherwise in certain test cases
# the change in value is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan')


RESUME_WORKBOOK = """
---
version: '2.0'

name: wb

workflows:
  wf1:
    type: direct
    tasks:
      task1:
        action: std.echo output="Hi!"
        on-complete:
          - task2
          - pause

      task2:
        action: std.echo output="Task 2"
"""


RESUME_WORKBOOK_DIFF_ENV_VAR = """
---
version: '2.0'

name: wb

workflows:
  wf1:
    type: direct
    tasks:
      task1:
        action: std.echo output="Hi!"
        on-complete:
          - task2

      task2:
        action: std.echo output=<% env().var1 %>
        pause-before: true
        on-complete:
          - task3

      task3:
        action: std.echo output=<% env().var2 %>
"""


RESUME_WORKBOOK_REVERSE = """
---
version: '2.0'

name: resume_reverse

workflows:
  wf:
    type: reverse
    tasks:
      task1:
        action: std.echo output="Hi!"
        wait-after: 1

      task2:
        action: std.echo output="Task 2"
        requires: [task1]
"""


WORKBOOK_TWO_BRANCHES = """
---
version: '2.0'

name: wb

workflows:
  wf1:
    type: direct
    tasks:
      task1:
        action: std.echo output="Hi!"
        on-complete:
          - task2
          - task3
          - pause

      task2:
        action: std.echo output="Task 2"

      task3:
        action: std.echo output="Task 3"
"""


WORKBOOK_TWO_START_TASKS = """
---
version: '2.0'

name: wb

workflows:
  wf1:
    type: direct
    tasks:
      task1:
        action: std.echo output="Task 1"
        on-complete:
          - task3
          - pause

      task2:
        action: std.echo output="Task 2"
        on-complete:
          - pause

      task3:
        action: std.echo output="Task 3"
"""


WORKBOOK_DIFFERENT_TASK_STATES = """
---
version: '2.0'

name: wb

workflows:
  wf1:
    type: direct
    tasks:
      task1:
        action: std.echo output="Hi!"
        on-complete:
          - task3
          - pause

      task2:
        action: std.async_noop
        # This one won't be finished when execution is already PAUSED.
        on-complete:
          - task4

      task3:
        action: std.echo output="Task 3"

      task4:
        action: std.echo output="Task 4"
"""


class WorkflowResumeTest(base.EngineTestCase):
    def setUp(self):
        super(WorkflowResumeTest, self).setUp()

        self.wb_spec = spec_parser.get_workbook_spec_from_yaml(RESUME_WORKBOOK)
        self.wf_spec = self.wb_spec.get_workflows()['wf1']

    def test_resume_direct(self):
        wb_service.create_workbook_v2(RESUME_WORKBOOK)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wb.wf1')

        self.await_workflow_paused(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(states.PAUSED, wf_ex.state)
            self.assertEqual(2, len(task_execs))

        self.engine.resume_workflow(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            self.assertEqual(2, len(wf_ex.task_executions))

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(states.SUCCESS, wf_ex.state)
            self.assertEqual(2, len(task_execs))

    def test_resume_reverse(self):
        wb_service.create_workbook_v2(RESUME_WORKBOOK_REVERSE)

        # Start workflow.
        wf_ex = self.engine.start_workflow(
            'resume_reverse.wf',
            task_name='task2'
        )

        self.engine.pause_workflow(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(states.PAUSED, wf_ex.state)
            self.assertEqual(1, len(task_execs))

        self.engine.resume_workflow(wf_ex.id)

        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertEqual(states.RUNNING, wf_ex.state)

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(states.SUCCESS, wf_ex.state)
            self.assertEqual(2, len(task_execs))

    def test_resume_two_branches(self):
        wb_service.create_workbook_v2(WORKBOOK_TWO_BRANCHES)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wb.wf1')

        self.await_workflow_paused(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(states.PAUSED, wf_ex.state)
            self.assertEqual(3, len(task_execs))

        wf_ex = self.engine.resume_workflow(wf_ex.id)

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(states.SUCCESS, wf_ex.state)

            # We can see 3 tasks in execution.
            self.assertEqual(3, len(task_execs))

    def test_resume_two_start_tasks(self):
        wb_service.create_workbook_v2(WORKBOOK_TWO_START_TASKS)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wb.wf1')

        self.await_workflow_paused(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(states.PAUSED, wf_ex.state)

            # The exact number of tasks depends on which of two tasks
            # 'task1' and 'task2' completed earlier.
            self.assertGreaterEqual(len(task_execs), 2)

            task1_ex = self._assert_single_item(task_execs, name='task1')
            task2_ex = self._assert_single_item(task_execs, name='task2')

        self.await_task_success(task1_ex.id)
        self.await_task_success(task2_ex.id)

        self.engine.resume_workflow(wf_ex.id)

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(states.SUCCESS, wf_ex.state)
            self.assertEqual(3, len(task_execs))

    def test_resume_different_task_states(self):
        wb_service.create_workbook_v2(WORKBOOK_DIFFERENT_TASK_STATES)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wb.wf1')

        self.await_workflow_paused(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(states.PAUSED, wf_ex.state)
            self.assertEqual(3, len(task_execs))

            task2_ex = self._assert_single_item(task_execs, name='task2')

        # Task2 is not finished yet.
        self.assertFalse(states.is_completed(task2_ex.state))

        wf_ex = self.engine.resume_workflow(wf_ex.id)

        self.assertEqual(states.RUNNING, wf_ex.state)

        # Wait for task3 to be processed.
        task3_ex = self._assert_single_item(task_execs, name='task3')

        self.await_task_success(task3_ex.id)
        self.await_task_processed(task3_ex.id)

        # Finish task2.
        task2_action_ex = db_api.get_action_executions(
            task_execution_id=task2_ex.id
        )[0]

        self.engine.on_action_complete(task2_action_ex.id, ml_actions.Result())

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(states.SUCCESS, wf_ex.state, wf_ex.state_info)
            self.assertEqual(4, len(task_execs))

    def test_resume_fails(self):
        # Start and pause workflow.
        wb_service.create_workbook_v2(WORKBOOK_DIFFERENT_TASK_STATES)

        wf_ex = self.engine.start_workflow('wb.wf1')

        self.await_workflow_paused(wf_ex.id)

        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertEqual(states.PAUSED, wf_ex.state)

        # Simulate failure and check if it is handled.
        err = exc.MistralError('foo')

        with mock.patch.object(
                db_api,
                'get_workflow_execution',
                side_effect=err):
            self.assertRaises(
                exc.MistralError,
                self.engine.resume_workflow,
                wf_ex.id
            )

    def test_resume_diff_env_vars(self):
        wb_service.create_workbook_v2(RESUME_WORKBOOK_DIFF_ENV_VAR)

        # Initial environment variables for the workflow execution.
        env = {
            'var1': 'fee fi fo fum',
            'var2': 'foobar'
        }

        # Start workflow.
        wf_ex = self.engine.start_workflow('wb.wf1', env=env)

        self.await_workflow_paused(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            task_1_ex = self._assert_single_item(task_execs, name='task1')
            task_2_ex = self._assert_single_item(task_execs, name='task2')

            self.assertEqual(states.PAUSED, wf_ex.state)
            self.assertEqual(2, len(task_execs))
            self.assertDictEqual(env, wf_ex.params['env'])
            self.assertDictEqual(env, wf_ex.context['__env'])
            self.assertEqual(states.SUCCESS, task_1_ex.state)
            self.assertEqual(states.IDLE, task_2_ex.state)

        # Update env in workflow execution with the following.
        updated_env = {
            'var1': 'Task 2',
            'var2': 'Task 3'
        }

        # Update the env variables and resume workflow.
        self.engine.resume_workflow(wf_ex.id, env=updated_env)

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertDictEqual(updated_env, wf_ex.params['env'])
            self.assertDictEqual(updated_env, wf_ex.context['__env'])
            self.assertEqual(3, len(task_execs))

            # Check result of task2.
            task_2_ex = self._assert_single_item(task_execs, name='task2')

            self.assertEqual(states.SUCCESS, task_2_ex.state)

        # Re-read task execution, otherwise lazy loading of action executions
        # may not work.
        with db_api.transaction():
            task_2_ex = db_api.get_task_execution(task_2_ex.id)

            task_2_result = data_flow.get_task_execution_result(task_2_ex)

        self.assertEqual(updated_env['var1'], task_2_result)

        # Check result of task3.
        task_3_ex = self._assert_single_item(
            task_execs,
            name='task3'
        )

        self.assertEqual(states.SUCCESS, task_3_ex.state)

        # Re-read task execution, otherwise lazy loading of action executions
        # may not work.
        with db_api.transaction():
            task_3_ex = db_api.get_task_execution(task_3_ex.id)

            task_3_result = data_flow.get_task_execution_result(task_3_ex)

        self.assertEqual(updated_env['var2'], task_3_result)

mistral-6.0.0/mistral/tests/unit/engine/test_direct_workflow_rerun_cancelled.py

# Copyright 2016 - Brocade Communications Systems, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import mock

from oslo_config import cfg

from mistral.actions import std_actions
from mistral.db.v2 import api as db_api
from mistral.services import workbooks as wb_service
from mistral.tests.unit.engine import base
from mistral.workflow import states
from mistral_lib import actions as ml_actions


# Use the set_default method to set value otherwise in certain test cases
# the change in value is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan')


class DirectWorkflowRerunCancelledTest(base.EngineTestCase):

    @mock.patch.object(
        std_actions.EchoAction,
        'run',
        mock.MagicMock(
            side_effect=[
                'Task 2',  # Mock task2 success.
                'Task 3'   # Mock task3 success.
            ]
        )
    )
    def test_rerun_cancelled_task(self):
        wb_def = """
        version: '2.0'

        name: wb1

        workflows:
          wf1:
            type: direct
            tasks:
              t1:
                action: std.async_noop
                on-success:
                  - t2
              t2:
                action: std.echo output="Task 2"
                on-success:
                  - t3
              t3:
                action: std.echo output="Task 3"
        """

        wb_service.create_workbook_v2(wb_def)

        wf1_ex = self.engine.start_workflow('wb1.wf1')

        self.await_workflow_state(wf1_ex.id, states.RUNNING)

        with db_api.transaction():
            wf1_execs = db_api.get_workflow_executions()

            wf1_ex = self._assert_single_item(wf1_execs, name='wb1.wf1')
            wf1_t1_ex = self._assert_single_item(
                wf1_ex.task_executions,
                name='t1'
            )

        wf1_t1_action_exs = db_api.get_action_executions(
            task_execution_id=wf1_t1_ex.id
        )

        self.assertEqual(1, len(wf1_t1_action_exs))
        self.assertEqual(states.RUNNING, wf1_t1_action_exs[0].state)

        # Cancel action execution for task.
        self.engine.on_action_complete(
            wf1_t1_action_exs[0].id,
            ml_actions.Result(cancel=True)
        )

        self.await_task_cancelled(wf1_t1_ex.id)
        self.await_workflow_cancelled(wf1_ex.id)

        with db_api.transaction():
            wf1_ex = db_api.get_workflow_execution(wf1_ex.id)

            wf1_task_execs = wf1_ex.task_executions

            wf1_t1_ex = self._assert_single_item(wf1_task_execs, name='t1')

        self.assertEqual(states.CANCELLED, wf1_ex.state)
        self.assertEqual("Cancelled tasks: t1", wf1_ex.state_info)
        self.assertEqual(1, len(wf1_task_execs))
        self.assertEqual(states.CANCELLED, wf1_t1_ex.state)
        self.assertIsNone(wf1_t1_ex.state_info)

        # Resume workflow and re-run cancelled task.
        self.engine.rerun_workflow(wf1_t1_ex.id)

        with db_api.transaction():
            wf1_ex = db_api.get_workflow_execution(wf1_ex.id)

            wf1_task_execs = wf1_ex.task_executions

        self.assertEqual(states.RUNNING, wf1_ex.state)
        self.assertIsNone(wf1_ex.state_info)

        # Mark async action execution complete.
        wf1_t1_ex = self._assert_single_item(wf1_task_execs, name='t1')

        wf1_t1_action_exs = db_api.get_action_executions(
            task_execution_id=wf1_t1_ex.id
        )

        self.assertEqual(states.RUNNING, wf1_t1_ex.state)
        self.assertEqual(2, len(wf1_t1_action_exs))

        # Check there is exactly 1 action in Running and 1 in Cancelled state.
        # Order doesn't matter.
        wf1_t1_aex_running = self._assert_single_item(
            wf1_t1_action_exs,
            state=states.RUNNING
        )
        self._assert_single_item(wf1_t1_action_exs, state=states.CANCELLED)

        self.engine.on_action_complete(
            wf1_t1_aex_running.id,
            ml_actions.Result(data={'foo': 'bar'})
        )

        # Wait for the workflow to succeed.
        self.await_workflow_success(wf1_ex.id)

        with db_api.transaction():
            wf1_ex = db_api.get_workflow_execution(wf1_ex.id)

            wf1_task_execs = wf1_ex.task_executions

            self.assertEqual(states.SUCCESS, wf1_ex.state)
            self.assertIsNone(wf1_ex.state_info)
            self.assertEqual(3, len(wf1_task_execs))

            wf1_t1_ex = self._assert_single_item(wf1_task_execs, name='t1')
            wf1_t2_ex = self._assert_single_item(wf1_task_execs, name='t2')
            wf1_t3_ex = self._assert_single_item(wf1_task_execs, name='t3')

        # Check action executions of task 1.
        self.assertEqual(states.SUCCESS, wf1_t1_ex.state)
        self.assertIsNone(wf1_t1_ex.state_info)

        wf1_t1_action_exs = db_api.get_action_executions(
            task_execution_id=wf1_t1_ex.id
        )

        self.assertEqual(2, len(wf1_t1_action_exs))

        # Check there is exactly 1 action in Success and 1 in Cancelled state.
        # Order doesn't matter.
        self._assert_single_item(wf1_t1_action_exs, state=states.SUCCESS)
        self._assert_single_item(wf1_t1_action_exs, state=states.CANCELLED)

        # Check action executions of task 2.
        self.assertEqual(states.SUCCESS, wf1_t2_ex.state)

        wf1_t2_action_exs = db_api.get_action_executions(
            task_execution_id=wf1_t2_ex.id
        )

        self.assertEqual(1, len(wf1_t2_action_exs))
        self.assertEqual(states.SUCCESS, wf1_t2_action_exs[0].state)

        # Check action executions of task 3.
        self.assertEqual(states.SUCCESS, wf1_t3_ex.state)

        wf1_t3_action_exs = db_api.get_action_executions(
            task_execution_id=wf1_t3_ex.id
        )

        self.assertEqual(1, len(wf1_t3_action_exs))
        self.assertEqual(states.SUCCESS, wf1_t3_action_exs[0].state)

    @mock.patch.object(
        std_actions.EchoAction,
        'run',
        mock.MagicMock(
            side_effect=[
                'Task 1',  # Mock task1 success.
                'Task 3'   # Mock task3 success.
            ]
        )
    )
    def test_rerun_cancelled_subflow(self):
        wb_def = """
        version: '2.0'

        name: wb1

        workflows:
          wf1:
            type: direct
            tasks:
              t1:
                action: std.echo output="Task 1"
                on-success:
                  - t2
              t2:
                workflow: wf2
                on-success:
                  - t3
              t3:
                action: std.echo output="Task 3"

          wf2:
            type: direct
            output:
              result: <% task(wf2_t1).result %>
            tasks:
              wf2_t1:
                action: std.async_noop
        """

        wb_service.create_workbook_v2(wb_def)

        wf1_ex = self.engine.start_workflow('wb1.wf1')

        self.await_workflow_state(wf1_ex.id, states.RUNNING)

        with db_api.transaction():
            # Wait for task 1 to complete.
            wf1_execs = db_api.get_workflow_executions()

            wf1_ex = self._assert_single_item(wf1_execs, name='wb1.wf1')
            wf1_t1_ex = self._assert_single_item(
                wf1_ex.task_executions,
                name='t1'
            )

        self.await_task_success(wf1_t1_ex.id)

        with db_api.transaction():
            # Wait for the async task to run.
            wf1_execs = db_api.get_workflow_executions()

            wf1_ex = self._assert_single_item(wf1_execs, name='wb1.wf1')
            wf1_t2_ex = self._assert_single_item(
                wf1_ex.task_executions,
                name='t2'
            )

        self.await_task_state(wf1_t2_ex.id, states.RUNNING)

        with db_api.transaction():
            sub_wf_exs = db_api.get_workflow_executions(
                task_execution_id=wf1_t2_ex.id
            )

            self.assertEqual(1, len(sub_wf_exs))

            wf2_ex_running = self._assert_single_item(
                sub_wf_exs,
                state=states.RUNNING
            )
            wf2_t1_ex = self._assert_single_item(
                wf2_ex_running.task_executions,
                name='wf2_t1'
            )

        self.await_task_state(wf2_t1_ex.id, states.RUNNING)

        wf2_t1_action_exs = db_api.get_action_executions(
            task_execution_id=wf2_t1_ex.id
        )

        self.assertEqual(1, len(wf2_t1_action_exs))
        self.assertEqual(states.RUNNING, wf2_t1_action_exs[0].state)

        # Cancel subworkflow.
        self.engine.stop_workflow(wf2_ex_running.id, states.CANCELLED)

        self.await_workflow_cancelled(wf2_ex_running.id)
        self.await_workflow_cancelled(wf1_ex.id)

        # Resume workflow and re-run failed subworkflow task.
        self.engine.rerun_workflow(wf1_t2_ex.id)

        with db_api.transaction():
            wf1_execs = db_api.get_workflow_executions()

            wf1_ex = self._assert_single_item(wf1_execs, name='wb1.wf1')
            wf1_t2_ex = self._assert_single_item(
                wf1_ex.task_executions,
                name='t2'
            )

        self.await_task_state(wf1_t2_ex.id, states.RUNNING)

        with db_api.transaction():
            sub_wf_exs = db_api.get_workflow_executions(
                task_execution_id=wf1_t2_ex.id
            )

            self.assertEqual(2, len(sub_wf_exs))

            # Check there is exactly 1 sub-wf in Running and 1 in Cancelled
            # state. Order doesn't matter.
            self._assert_single_item(sub_wf_exs, state=states.CANCELLED)

            wf2_ex_running = self._assert_single_item(
                sub_wf_exs,
                state=states.RUNNING
            )
            wf2_t1_ex = self._assert_single_item(
                wf2_ex_running.task_executions,
                name='wf2_t1'
            )

        self.await_task_state(wf2_t1_ex.id, states.RUNNING)

        wf2_t1_action_exs = db_api.get_action_executions(
            task_execution_id=wf2_t1_ex.id
        )

        self.assertEqual(1, len(wf2_t1_action_exs))
        self.assertEqual(states.RUNNING, wf2_t1_action_exs[0].state)

        # Mark async action execution complete.
        self.engine.on_action_complete(
            wf2_t1_action_exs[0].id,
            ml_actions.Result(data={'foo': 'bar'})
        )

        # Wait for the workflows to succeed.
        self.await_workflow_success(wf1_ex.id)
        self.await_workflow_success(wf2_ex_running.id)

        sub_wf_exs = db_api.get_workflow_executions(
            task_execution_id=wf1_t2_ex.id
        )

        self.assertEqual(2, len(sub_wf_exs))

        # Check there is exactly 1 sub-wf in Success and 1 in Cancelled state.
        # Order doesn't matter.
        self._assert_single_item(sub_wf_exs, state=states.SUCCESS)
        self._assert_single_item(sub_wf_exs, state=states.CANCELLED)

        wf2_t1_action_exs = db_api.get_action_executions(
            task_execution_id=wf2_t1_ex.id
        )

        self.assertEqual(1, len(wf2_t1_action_exs))
        self.assertEqual(states.SUCCESS, wf2_t1_action_exs[0].state)

    @mock.patch.object(
        std_actions.EchoAction,
        'run',
        mock.MagicMock(
            side_effect=[
                'Task 1',  # Mock task1 success.
                'Task 3'   # Mock task3 success.
            ]
        )
    )
    def test_rerun_cancelled_subflow_task(self):
        wb_def = """
        version: '2.0'

        name: wb1

        workflows:
          wf1:
            type: direct
            tasks:
              t1:
                action: std.echo output="Task 1"
                on-success:
                  - t2
              t2:
                workflow: wf2
                on-success:
                  - t3
              t3:
                action: std.echo output="Task 3"

          wf2:
            type: direct
            output:
              result: <% task(wf2_t1).result %>
            tasks:
              wf2_t1:
                action: std.async_noop
        """

        wb_service.create_workbook_v2(wb_def)

        wf1_ex = self.engine.start_workflow('wb1.wf1')

        self.await_workflow_state(wf1_ex.id, states.RUNNING)

        with db_api.transaction():
            # Wait for task 1 to complete.
            wf1_execs = db_api.get_workflow_executions()

            wf1_ex = self._assert_single_item(wf1_execs, name='wb1.wf1')
            wf1_t1_ex = self._assert_single_item(
                wf1_ex.task_executions,
                name='t1'
            )

        self.await_task_success(wf1_t1_ex.id)

        with db_api.transaction():
            # Wait for the async task to run.
            wf1_execs = db_api.get_workflow_executions()

            wf1_ex = self._assert_single_item(wf1_execs, name='wb1.wf1')
            wf1_t2_ex = self._assert_single_item(
                wf1_ex.task_executions,
                name='t2'
            )

        self.await_task_state(wf1_t2_ex.id, states.RUNNING)

        with db_api.transaction():
            sub_wf_exs = db_api.get_workflow_executions(
                task_execution_id=wf1_t2_ex.id
            )

            self.assertEqual(1, len(sub_wf_exs))
            self.assertEqual(states.RUNNING, sub_wf_exs[0].state)

            wf2_ex = sub_wf_exs[0]
            wf2_t1_ex = self._assert_single_item(
                wf2_ex.task_executions,
                name='wf2_t1'
            )

        self.await_task_state(wf2_t1_ex.id, states.RUNNING)

        wf2_t1_action_exs = db_api.get_action_executions(
            task_execution_id=wf2_t1_ex.id
        )

        self.assertEqual(1, len(wf2_t1_action_exs))
        self.assertEqual(states.RUNNING, wf2_t1_action_exs[0].state)

        # Cancel action execution for task.
        self.engine.on_action_complete(
            wf2_t1_action_exs[0].id,
            ml_actions.Result(cancel=True)
        )

        self.await_workflow_cancelled(wf2_ex.id)
        self.await_workflow_cancelled(wf1_ex.id)

        # Resume workflow and re-run failed subworkflow task.
        self.engine.rerun_workflow(wf2_t1_ex.id)

        with db_api.transaction():
            wf1_execs = db_api.get_workflow_executions()

            wf1_ex = self._assert_single_item(wf1_execs, name='wb1.wf1')
            wf1_t2_ex = self._assert_single_item(
                wf1_ex.task_executions,
                name='t2'
            )

        self.await_task_state(wf1_t2_ex.id, states.RUNNING)

        with db_api.transaction():
            sub_wf_exs = db_api.get_workflow_executions(
                task_execution_id=wf1_t2_ex.id
            )

            self.assertEqual(1, len(sub_wf_exs))
            self.assertEqual(states.RUNNING, sub_wf_exs[0].state)

            wf2_ex = sub_wf_exs[0]
            wf2_t1_ex = self._assert_single_item(
                wf2_ex.task_executions,
                name='wf2_t1'
            )

        self.await_task_state(wf2_t1_ex.id, states.RUNNING)

        wf2_t1_action_exs = db_api.get_action_executions(
            task_execution_id=wf2_t1_ex.id
        )

        self.assertEqual(2, len(wf2_t1_action_exs))

        # Check there is exactly 1 action in Running and 1 in Cancelled state.
        # Order doesn't matter.
        self._assert_single_item(wf2_t1_action_exs, state=states.CANCELLED)

        wf2_t1_aex_running = self._assert_single_item(
            wf2_t1_action_exs,
            state=states.RUNNING
        )

        # Mark async action execution complete.
        self.engine.on_action_complete(
            wf2_t1_aex_running.id,
            ml_actions.Result(data={'foo': 'bar'})
        )

        # Wait for the workflows to succeed.
        self.await_workflow_success(wf1_ex.id)
        self.await_workflow_success(wf2_ex.id)

        sub_wf_exs = db_api.get_workflow_executions(
            task_execution_id=wf1_t2_ex.id
        )

        self.assertEqual(1, len(sub_wf_exs))
        self.assertEqual(states.SUCCESS, sub_wf_exs[0].state)

        wf2_t1_action_exs = db_api.get_action_executions(
            task_execution_id=wf2_t1_ex.id
        )

        self.assertEqual(2, len(wf2_t1_action_exs))

        # Check there is exactly 1 action in Success and 1 in Cancelled state.
        # Order doesn't matter.
        self._assert_single_item(wf2_t1_action_exs, state=states.SUCCESS)
        self._assert_single_item(wf2_t1_action_exs, state=states.CANCELLED)

    @mock.patch.object(
        std_actions.EchoAction,
        'run',
        mock.MagicMock(
            side_effect=[
                'Task 2'  # Mock task2 success.
            ]
        )
    )
    def test_rerun_cancelled_with_items(self):
        wb_def = """
        version: '2.0'

        name: wb1

        workflows:
          wf1:
            type: direct
            tasks:
              t1:
                with-items: i in <% list(range(0, 3)) %>
                action: std.async_noop
                on-success:
                  - t2
              t2:
                action: std.echo output="Task 2"
        """

        wb_service.create_workbook_v2(wb_def)

        wf1_ex = self.engine.start_workflow('wb1.wf1')

        self.await_workflow_state(wf1_ex.id, states.RUNNING)

        with db_api.transaction():
            wf1_execs = db_api.get_workflow_executions()

            wf1_ex = self._assert_single_item(wf1_execs, name='wb1.wf1')
            wf1_t1_ex = self._assert_single_item(
                wf1_ex.task_executions,
                name='t1'
            )

        wf1_t1_action_exs = db_api.get_action_executions(
            task_execution_id=wf1_t1_ex.id
        )

        self.assertEqual(3, len(wf1_t1_action_exs))

        self._assert_multiple_items(wf1_t1_action_exs, 3, state=states.RUNNING)

        # Cancel action execution for tasks.
        for wf1_t1_action_ex in wf1_t1_action_exs:
            self.engine.on_action_complete(
                wf1_t1_action_ex.id,
                ml_actions.Result(cancel=True)
            )

        self.await_workflow_cancelled(wf1_ex.id)

        wf1_t1_action_exs = db_api.get_action_executions(
            task_execution_id=wf1_t1_ex.id
        )

        self.assertEqual(3, len(wf1_t1_action_exs))

        self._assert_multiple_items(
            wf1_t1_action_exs,
            3,
            state=states.CANCELLED
        )

        # Resume workflow and re-run failed with items task.
        self.engine.rerun_workflow(wf1_t1_ex.id, reset=False)

        with db_api.transaction():
            wf1_execs = db_api.get_workflow_executions()

            wf1_ex = self._assert_single_item(wf1_execs, name='wb1.wf1')
            wf1_t1_ex = self._assert_single_item(
                wf1_ex.task_executions,
                name='t1'
            )

        self.await_workflow_state(wf1_ex.id, states.RUNNING)

        wf1_t1_action_exs = db_api.get_action_executions(
            task_execution_id=wf1_t1_ex.id
        )

        self.assertEqual(6, len(wf1_t1_action_exs))

        # Check there are exactly 3 actions in Running and 3 in Cancelled
        # state. Order doesn't matter.
        self._assert_multiple_items(
            wf1_t1_action_exs,
            3,
            state=states.CANCELLED
        )

        wf1_t1_aexs_running = self._assert_multiple_items(
            wf1_t1_action_exs,
            3,
            state=states.RUNNING
        )

        # Mark async action execution complete.
        for action_ex in wf1_t1_aexs_running:
            self.engine.on_action_complete(
                action_ex.id,
                ml_actions.Result(data={'foo': 'bar'})
            )

        # Wait for the workflows to succeed.
        self.await_workflow_success(wf1_ex.id)

        with db_api.transaction():
            wf1_ex = db_api.get_workflow_execution(wf1_ex.id)

            wf1_t1_ex = self._assert_single_item(
                wf1_ex.task_executions,
                name='t1'
            )

        wf1_t1_action_exs = db_api.get_action_executions(
            task_execution_id=wf1_t1_ex.id
        )

        self.assertEqual(6, len(wf1_t1_action_exs))

        # Check there are exactly 3 actions in Success and 3 in Cancelled
        # state. Order doesn't matter.
        self._assert_multiple_items(wf1_t1_action_exs, 3, state=states.SUCCESS)
        self._assert_multiple_items(
            wf1_t1_action_exs,
            3,
            state=states.CANCELLED
        )

mistral-6.0.0/mistral/tests/unit/engine/test_reverse_workflow.py

# Copyright 2014 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg

from mistral.db.v2 import api as db_api
from mistral import exceptions as exc
from mistral.services import workbooks as wb_service
from mistral.services import workflows as wf_service
from mistral.tests.unit.engine import base
from mistral.workflow import states


# Use the set_default method to set value otherwise in certain test cases
# the change in value is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan')


WORKBOOK = """
---
version: '2.0'

name: my_wb

workflows:
  wf1:
    type: reverse
    input:
      - param1
      - param2

    tasks:
      task1:
        action: std.echo output=<% $.param1 %>
        publish:
          result1: <% task(task1).result %>

      task2:
        action: std.echo output="<% $.result1 %> & <% $.param2 %>"
        publish:
          result2: <% task(task2).result %>
        requires: [task1]

      task3:
        action: std.noop

      task4:
        action: std.noop
        requires: task3
"""


class ReverseWorkflowEngineTest(base.EngineTestCase):
    def setUp(self):
        super(ReverseWorkflowEngineTest, self).setUp()

        wb_service.create_workbook_v2(WORKBOOK)

    def test_start_task1(self):
        wf_input = {'param1': 'a', 'param2': 'b'}

        wf_ex = self.engine.start_workflow(
            'my_wb.wf1',
            wf_input=wf_input,
            task_name='task1'
        )

        # Execution 1.
        self.assertIsNotNone(wf_ex)
        self.assertDictEqual(wf_input, wf_ex.input)
        self.assertDictEqual(
            {'task_name': 'task1', 'namespace': ''},
            wf_ex.params
        )

        # Wait till workflow 'wf1' is completed.
        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

        self.assertEqual(1, len(task_execs))
        self.assertEqual(1, len(db_api.get_task_executions()))

        task_ex = self._assert_single_item(
            task_execs,
            name='task1',
            state=states.SUCCESS
        )

        self.assertDictEqual({'result1': 'a'}, task_ex.published)

    def test_start_task2(self):
        wf_input = {'param1': 'a', 'param2': 'b'}

        wf_ex = self.engine.start_workflow(
            'my_wb.wf1',
            wf_input=wf_input,
            task_name='task2'
        )

        # Execution 1.
        self.assertIsNotNone(wf_ex)
        self.assertDictEqual(wf_input, wf_ex.input)
        self.assertDictEqual(
            {'task_name': 'task2', 'namespace': ''},
            wf_ex.params
        )

        # Wait till workflow 'wf1' is completed.
        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

        self.assertEqual(2, len(task_execs))
        self.assertEqual(2, len(db_api.get_task_executions()))

        task1_ex = self._assert_single_item(
            task_execs,
            name='task1',
            state=states.SUCCESS
        )

        self.assertDictEqual({'result1': 'a'}, task1_ex.published)

        task2_ex = self._assert_single_item(
            task_execs,
            name='task2',
            state=states.SUCCESS
        )

        self.assertDictEqual({'result2': 'a & b'}, task2_ex.published)

    def test_one_line_requires_syntax(self):
        wf_input = {'param1': 'a', 'param2': 'b'}

        wf_ex = self.engine.start_workflow(
            'my_wb.wf1',
            wf_input=wf_input,
            task_name='task4'
        )

        self.await_workflow_success(wf_ex.id)

        tasks = db_api.get_task_executions()

        self.assertEqual(2, len(tasks))

        self._assert_single_item(tasks, name='task4', state=states.SUCCESS)
        self._assert_single_item(tasks, name='task3', state=states.SUCCESS)

    def test_inconsistent_task_names(self):
        wf_text = """
        version: '2.0'

        wf:
          type: reverse

          tasks:
            task2:
              action: std.noop

            task3:
              action: std.noop
              requires: [task1]
        """

        exception = self.assertRaises(
            exc.InvalidModelException,
            wf_service.create_workflows,
            wf_text
        )

        self.assertIn("Task 'task1' not found", str(exception))


# mistral-6.0.0/mistral/tests/unit/engine/__init__.py (empty file)

# mistral-6.0.0/mistral/tests/unit/engine/test_default_engine.py

# Copyright 2014 - Mirantis, Inc.
# Copyright 2015 - StackStorm, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import datetime

import mock
from oslo_config import cfg
from oslo_messaging.rpc import client as rpc_client
from oslo_utils import uuidutils

from mistral.db.v2 import api as db_api
from mistral.db.v2.sqlalchemy import models
from mistral.engine import default_engine as d_eng
from mistral import exceptions as exc
from mistral.executors import base as exe
from mistral.services import workbooks as wb_service
from mistral.services import workflows as wf_service
from mistral.tests.unit import base
from mistral.tests.unit.engine import base as eng_test_base
from mistral.workflow import states
from mistral_lib import actions as ml_actions


# Use the set_default method to set value otherwise in certain test cases
# the change in value is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan')


WORKBOOK = """
---
version: '2.0'

name: wb

workflows:
  wf:
    type: reverse

    input:
      - param1: value1
      - param2

    tasks:
      task1:
        action: std.echo output=<% $.param1 %>
        publish:
          var: <% task(task1).result %>

      task2:
        action: std.echo output=<% $.param2 %>
        requires: [task1]
"""

DATETIME_FORMAT = '%Y-%m-%d %H:%M:%S.%f'

ENVIRONMENT = {
    'id': uuidutils.generate_uuid(),
    'name': 'test',
    'description': 'my test settings',
    'variables': {
        'key1': 'abc',
        'key2': 123
    },
    'scope': 'private',
    'created_at': str(datetime.datetime.utcnow()),
    'updated_at': str(datetime.datetime.utcnow())
}

ENVIRONMENT_DB = models.Environment(
    id=ENVIRONMENT['id'],
    name=ENVIRONMENT['name'],
    description=ENVIRONMENT['description'],
    variables=ENVIRONMENT['variables'],
    scope=ENVIRONMENT['scope'],
    created_at=datetime.datetime.strptime(ENVIRONMENT['created_at'],
                                          DATETIME_FORMAT),
    updated_at=datetime.datetime.strptime(ENVIRONMENT['updated_at'],
                                          DATETIME_FORMAT)
)

MOCK_ENVIRONMENT = mock.MagicMock(return_value=ENVIRONMENT_DB)
MOCK_NOT_FOUND = mock.MagicMock(side_effect=exc.DBEntityNotFoundError())


@mock.patch.object(exe, 'get_executor', mock.Mock())
class DefaultEngineTest(base.DbTestCase):
    def setUp(self):
        super(DefaultEngineTest, self).setUp()

        wb_service.create_workbook_v2(WORKBOOK)

        # Note: For purposes of this test we can easily use
        # simple magic mocks for engine and executor clients
        self.engine = d_eng.DefaultEngine()

    def test_start_workflow(self):
        wf_input = {'param1': 'Hey', 'param2': 'Hi'}

        # Start workflow.
        wf_ex = self.engine.start_workflow(
            'wb.wf',
            wf_input=wf_input,
            description='my execution',
            task_name='task2'
        )

        self.assertIsNotNone(wf_ex)
        self.assertEqual(states.RUNNING, wf_ex.state)
        self.assertEqual('my execution', wf_ex.description)
        self.assertIn('__execution', wf_ex.context)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            task_ex = task_execs[0]

            self.assertEqual('wb.wf', task_ex.workflow_name)
            self.assertEqual('task1', task_ex.name)
            self.assertEqual(states.RUNNING, task_ex.state)
            self.assertIsNotNone(task_ex.spec)
            self.assertDictEqual({}, task_ex.runtime_context)

            # Data Flow properties.
            action_execs = db_api.get_action_executions(
                task_execution_id=task_ex.id
            )

            self.assertEqual(1, len(action_execs))

            task_action_ex = action_execs[0]

            self.assertIsNotNone(task_action_ex)
            self.assertDictEqual({'output': 'Hey'}, task_action_ex.input)

    def test_start_workflow_with_ex_id(self):
        wf_input = {'param1': 'Hey1', 'param2': 'Hi1'}
        the_ex_id = 'theId'

        # Start workflow.
        wf_ex = self.engine.start_workflow(
            'wb.wf',
            wf_input=wf_input,
            description='my execution',
            task_name='task2',
            wf_ex_id=the_ex_id
        )

        self.assertEqual(the_ex_id, wf_ex.id)

        wf_ex_2 = self.engine.start_workflow(
            'wb.wf',
            wf_input={'param1': 'Hey2', 'param2': 'Hi2'},
            wf_ex_id=the_ex_id
        )

        self.assertDictEqual(dict(wf_ex), dict(wf_ex_2))

        wf_executions = db_api.get_workflow_executions()

        self.assertEqual(1, len(wf_executions))

    def test_start_workflow_with_input_default(self):
        wf_input = {'param2': 'value2'}

        # Start workflow.
        wf_ex = self.engine.start_workflow(
            'wb.wf',
            wf_input=wf_input,
            task_name='task1'
        )

        self.assertIsNotNone(wf_ex)
        self.assertEqual(states.RUNNING, wf_ex.state)
        self.assertIn('__execution', wf_ex.context)

        # Note: We need to reread execution to access related tasks.
        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            task_ex = task_execs[0]

            self.assertEqual('wb.wf', task_ex.workflow_name)
            self.assertEqual('task1', task_ex.name)
            self.assertEqual(states.RUNNING, task_ex.state)
            self.assertIsNotNone(task_ex.spec)
            self.assertDictEqual({}, task_ex.runtime_context)

            # Data Flow properties.
            action_execs = db_api.get_action_executions(
                task_execution_id=task_ex.id
            )

            self.assertEqual(1, len(action_execs))

            task_action_ex = action_execs[0]

            self.assertIsNotNone(task_action_ex)
            self.assertDictEqual({'output': 'value1'}, task_action_ex.input)

    def test_start_workflow_with_adhoc_env(self):
        wf_input = {
            'param1': '<% env().key1 %>',
            'param2': '<% env().key2 %>'
        }
        env = ENVIRONMENT['variables']

        # Start workflow.
        wf_ex = self.engine.start_workflow(
            'wb.wf',
            wf_input=wf_input,
            env=env,
            task_name='task2')

        self.assertIsNotNone(wf_ex)

        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertDictEqual(wf_ex.params.get('env', {}), env)

    @mock.patch.object(db_api, "load_environment", MOCK_ENVIRONMENT)
    def test_start_workflow_with_saved_env(self):
        wf_input = {
            'param1': '<% env().key1 %>',
            'param2': '<% env().key2 %>'
        }
        env = ENVIRONMENT['variables']

        # Start workflow.
        wf_ex = self.engine.start_workflow(
            'wb.wf',
            wf_input=wf_input,
            env='test',
            task_name='task2'
        )

        self.assertIsNotNone(wf_ex)

        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertDictEqual(wf_ex.params.get('env', {}), env)

    @mock.patch.object(db_api, "get_environment", MOCK_NOT_FOUND)
    def test_start_workflow_env_not_found(self):
        e = self.assertRaises(
            exc.InputException,
            self.engine.start_workflow,
            'wb.wf',
            wf_input={
                'param1': '<% env().key1 %>',
                'param2': 'some value'
            },
            env='foo',
            task_name='task2'
        )

        self.assertEqual("Environment is not found: foo", str(e))

    def test_start_workflow_with_env_type_error(self):
        e = self.assertRaises(
            exc.InputException,
            self.engine.start_workflow,
            'wb.wf',
            wf_input={
                'param1': '<% env().key1 %>',
                'param2': 'some value'
            },
            env=True,
            task_name='task2'
        )

        self.assertIn('Unexpected value type for environment', str(e))

    def test_start_workflow_missing_parameters(self):
        e = self.assertRaises(
            exc.InputException,
            self.engine.start_workflow,
            'wb.wf',
            '',
            None,
            task_name='task2'
        )

        self.assertIn("Invalid input", str(e))
        self.assertIn("missing=['param2']", str(e))

    def test_start_workflow_unexpected_parameters(self):
        e = self.assertRaises(
            exc.InputException,
            self.engine.start_workflow,
            'wb.wf',
            wf_input={
                'param1': 'Hey',
                'param2': 'Hi',
                'unexpected_param': 'val'
            },
            task_name='task2'
        )

        self.assertIn("Invalid input", str(e))
        self.assertIn("unexpected=['unexpected_param']", str(e))

    def test_on_action_update(self):
        workflow = """
        version: '2.0'

        wf_async:
          type: direct

          tasks:
            task1:
              action: std.async_noop
              on-success:
                - task2

            task2:
              action: std.noop
        """

        # Start workflow.
        wf_service.create_workflows(workflow)

        wf_ex = self.engine.start_workflow('wf_async')

        self.assertIsNotNone(wf_ex)
        self.assertEqual(states.RUNNING, wf_ex.state)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            task1_ex = task_execs[0]

            self.assertEqual('task1', task1_ex.name)
            self.assertEqual(states.RUNNING, task1_ex.state)

            action_execs = db_api.get_action_executions(
                task_execution_id=task1_ex.id
            )

            self.assertEqual(1, len(action_execs))

            task1_action_ex = action_execs[0]

            self.assertEqual(states.RUNNING, task1_action_ex.state)

        # Pause action execution of 'task1'.
        task1_action_ex = self.engine.on_action_update(
            task1_action_ex.id,
            states.PAUSED
        )

        self.assertIsInstance(task1_action_ex, models.ActionExecution)
        self.assertEqual(states.PAUSED, task1_action_ex.state)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))
            self.assertEqual(states.PAUSED, task_execs[0].state)
            self.assertEqual(states.PAUSED, wf_ex.state)

            action_execs = db_api.get_action_executions(
                task_execution_id=task1_ex.id
            )

            self.assertEqual(1, len(action_execs))

            task1_action_ex = action_execs[0]

            self.assertEqual(states.PAUSED, task1_action_ex.state)

    def test_on_action_update_non_async(self):
        workflow = """
        version: '2.0'

        wf_sync:
          type: direct

          tasks:
            task1:
              action: std.noop
              on-success:
                - task2

            task2:
              action: std.noop
        """

        # Start workflow.
        wf_service.create_workflows(workflow)

        wf_ex = self.engine.start_workflow('wf_sync')

        self.assertIsNotNone(wf_ex)
        self.assertEqual(states.RUNNING, wf_ex.state)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            task1_ex = task_execs[0]

            self.assertEqual('task1', task1_ex.name)
            self.assertEqual(states.RUNNING, task1_ex.state)

            action_execs = db_api.get_action_executions(
                task_execution_id=task1_ex.id
            )

            self.assertEqual(1, len(action_execs))

            task1_action_ex = action_execs[0]

            self.assertEqual(states.RUNNING, task1_action_ex.state)

        self.assertRaises(
            exc.InvalidStateTransitionException,
            self.engine.on_action_update,
            task1_action_ex.id,
            states.PAUSED
        )

    def test_on_action_complete(self):
        wf_input = {'param1': 'Hey', 'param2': 'Hi'}

        # Start workflow.
        wf_ex = self.engine.start_workflow(
            'wb.wf',
            wf_input=wf_input,
            task_name='task2'
        )

        self.assertIsNotNone(wf_ex)
        self.assertEqual(states.RUNNING, wf_ex.state)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            task1_ex = task_execs[0]

            self.assertEqual('task1', task1_ex.name)
            self.assertEqual(states.RUNNING, task1_ex.state)
            self.assertIsNotNone(task1_ex.spec)
            self.assertDictEqual({}, task1_ex.runtime_context)
            self.assertNotIn('__execution', task1_ex.in_context)

            action_execs = db_api.get_action_executions(
                task_execution_id=task1_ex.id
            )

            self.assertEqual(1, len(action_execs))

            task1_action_ex = action_execs[0]

            self.assertIsNotNone(task1_action_ex)
            self.assertDictEqual({'output': 'Hey'}, task1_action_ex.input)

        # Finish action of 'task1'.
        task1_action_ex = self.engine.on_action_complete(
            task1_action_ex.id,
            ml_actions.Result(data='Hey')
        )

        self.assertIsInstance(task1_action_ex, models.ActionExecution)
        self.assertEqual('std.echo', task1_action_ex.name)
        self.assertEqual(states.SUCCESS, task1_action_ex.state)

        # Data Flow properties.
        task1_ex = db_api.get_task_execution(task1_ex.id)  # Re-read the state.

        self.assertDictEqual({'var': 'Hey'}, task1_ex.published)
        self.assertDictEqual({'output': 'Hey'}, task1_action_ex.input)
        self.assertDictEqual({'result': 'Hey'}, task1_action_ex.output)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            self.assertIsNotNone(wf_ex)
            self.assertEqual(states.RUNNING, wf_ex.state)

            task_execs = wf_ex.task_executions

            self.assertEqual(2, len(task_execs))

            task2_ex = self._assert_single_item(task_execs, name='task2')

            self.assertEqual(states.RUNNING, task2_ex.state)

            action_execs = db_api.get_action_executions(
                task_execution_id=task2_ex.id
            )

            self.assertEqual(1, len(action_execs))

            task2_action_ex = action_execs[0]

            self.assertIsNotNone(task2_action_ex)
            self.assertDictEqual({'output': 'Hi'}, task2_action_ex.input)

        # Finish 'task2'.
        task2_action_ex = self.engine.on_action_complete(
            task2_action_ex.id,
            ml_actions.Result(data='Hi')
        )

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            self.assertIsNotNone(wf_ex)

            task_execs = wf_ex.task_executions

            # Workflow completion check is done separately by the scheduler,
            # but the scheduler doesn't start in this test (in fact, it's just
            # a DB test) so the workflow is expected to be in RUNNING state.
            self.assertEqual(states.RUNNING, wf_ex.state)

            self.assertIsInstance(task2_action_ex, models.ActionExecution)
            self.assertEqual('std.echo', task2_action_ex.name)
            self.assertEqual(states.SUCCESS, task2_action_ex.state)

            # Data Flow properties.
            self.assertDictEqual({'output': 'Hi'}, task2_action_ex.input)
            self.assertDictEqual({}, task2_ex.published)
            self.assertDictEqual({'output': 'Hi'}, task2_action_ex.input)
            self.assertDictEqual({'result': 'Hi'}, task2_action_ex.output)

            self.assertEqual(2, len(task_execs))

            self._assert_single_item(task_execs, name='task1')
            self._assert_single_item(task_execs, name='task2')

    def test_stop_workflow_fail(self):
        # Start workflow.
        wf_ex = self.engine.start_workflow(
            'wb.wf',
            wf_input={
                'param1': 'Hey',
                'param2': 'Hi'
            },
            task_name="task2"
        )

        # Re-read execution to access related tasks.
        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.engine.stop_workflow(wf_ex.id, 'ERROR', "Stop this!")

        # Re-read from DB again
        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertEqual('ERROR', wf_ex.state)
        self.assertEqual("Stop this!", wf_ex.state_info)

    def test_stop_workflow_succeed(self):
        # Start workflow.
        wf_ex = self.engine.start_workflow(
            'wb.wf',
            wf_input={
                'param1': 'Hey',
                'param2': 'Hi'
            },
            task_name="task2"
        )

        # Re-read execution to access related tasks.
        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.engine.stop_workflow(wf_ex.id, 'SUCCESS', "Like this, done")

        # Re-read from DB again
        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertEqual('SUCCESS', wf_ex.state)
        self.assertEqual("Like this, done", wf_ex.state_info)

    def test_stop_workflow_bad_status(self):
        wf_ex = self.engine.start_workflow(
            'wb.wf',
            wf_input={
                'param1': 'Hey',
                'param2': 'Hi'
            },
            task_name="task2"
        )

        # Re-read execution to access related tasks.
        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertNotEqual(
            'PAUSE',
            self.engine.stop_workflow(wf_ex.id, 'PAUSE')
        )

    def test_resume_workflow(self):
        # TODO(akhmerov): Implement.
        pass


class DefaultEngineWithTransportTest(eng_test_base.EngineTestCase):
    def test_engine_client_remote_error(self):
        mocked = mock.Mock()
        mocked.sync_call.side_effect = rpc_client.RemoteError(
            'InputException',
            'Input is wrong'
        )

        self.engine_client._client = mocked

        self.assertRaises(
            exc.InputException,
            self.engine_client.start_workflow,
            'some_wf',
            {},
            'some_description'
        )

    def test_engine_client_remote_error_arbitrary(self):
        mocked = mock.Mock()
        mocked.sync_call.side_effect = KeyError('wrong key')

        self.engine_client._client = mocked

        exception = self.assertRaises(
            exc.MistralException,
            self.engine_client.start_workflow,
            'some_wf',
            {},
            'some_description'
        )

        self.assertIn('KeyError: wrong key', str(exception))


# mistral-6.0.0/mistral/tests/unit/engine/test_error_result.py

# Copyright 2015 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_config import cfg

from mistral.db.v2 import api as db_api
from mistral.services import workflows as wf_service
from mistral.tests.unit import base as test_base
from mistral.tests.unit.engine import base
from mistral.workflow import data_flow
from mistral.workflow import states
from mistral_lib import actions as actions_base


# Use the set_default method to set value otherwise in certain test cases
# the change in value is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan')


WF = """
---
version: '2.0'

wf:
  input:
    - success_result
    - error_result

  tasks:
    task1:
      action: my_action
      input:
        success_result: <% $.success_result %>
        error_result: <% $.error_result %>
      publish:
        p_var: <% task(task1).result %>
      on-error:
        - task2: <% task(task1).result = 2 %>
        - task3: <% task(task1).result = 3 %>

    task2:
      action: std.noop

    task3:
      action: std.noop
"""


class MyAction(actions_base.Action):
    def __init__(self, success_result, error_result):
        self.success_result = success_result
        self.error_result = error_result

    def run(self, context):
        return actions_base.Result(
            data=self.success_result,
            error=self.error_result
        )

    def test(self):
        raise NotImplementedError


class ErrorResultTest(base.EngineTestCase):
    def setUp(self):
        super(ErrorResultTest, self).setUp()

        test_base.register_action_class('my_action', MyAction)

    def test_error_result1(self):
        wf_service.create_workflows(WF)

        # Start workflow.
        wf_ex = self.engine.start_workflow(
            'wf',
            wf_input={
                'success_result': None,
                'error_result': 2
            }
        )

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            tasks = wf_ex.task_executions

            self.assertEqual(2, len(tasks))

            task1 = self._assert_single_item(tasks, name='task1')
            task2 = self._assert_single_item(tasks, name='task2')

            self.assertEqual(states.ERROR, task1.state)
            self.assertEqual(states.SUCCESS, task2.state)

            # "publish" clause is ignored in case of ERROR so task execution
            # field must be empty.
            self.assertDictEqual({}, task1.published)

            self.assertEqual(2, data_flow.get_task_execution_result(task1))

    def test_error_result2(self):
        wf_service.create_workflows(WF)

        # Start workflow.
        wf_ex = self.engine.start_workflow(
            'wf',
            wf_input={
                'success_result': None,
                'error_result': 3
            }
        )

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            tasks = wf_ex.task_executions

            self.assertEqual(2, len(tasks))

            task1 = self._assert_single_item(tasks, name='task1')
            task3 = self._assert_single_item(tasks, name='task3')

            self.assertEqual(states.ERROR, task1.state)
            self.assertEqual(states.SUCCESS, task3.state)

            # "publish" clause is ignored in case of ERROR so task execution
            # field must be empty.
            self.assertDictEqual({}, task1.published)

            self.assertEqual(3, data_flow.get_task_execution_result(task1))

    def test_success_result(self):
        wf_service.create_workflows(WF)

        # Start workflow.
        wf_ex = self.engine.start_workflow(
            'wf',
            wf_input={
                'success_result': 'success',
                'error_result': None
            }
        )

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            tasks = wf_ex.task_executions

            self.assertEqual(1, len(tasks))

            task1 = self._assert_single_item(tasks, name='task1')

            self.assertEqual(states.SUCCESS, task1.state)

            # In case of SUCCESS the "publish" clause is applied, so the
            # published field must contain the result.
            self.assertDictEqual({'p_var': 'success'}, task1.published)

            self.assertEqual(
                'success',
                data_flow.get_task_execution_result(task1)
            )


# mistral-6.0.0/mistral/tests/unit/engine/test_direct_workflow.py

# Copyright 2014 - Mirantis, Inc.
# Copyright 2015 - StackStorm, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_config import cfg

from mistral.db.v2 import api as db_api
from mistral import exceptions as exc
from mistral.services import workflows as wf_service
from mistral.tests.unit.engine import base
from mistral.workflow import states
from mistral_lib import actions as ml_actions


# Use the set_default method to set value otherwise in certain test cases
# the change in value is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan')


class DirectWorkflowEngineTest(base.EngineTestCase):
    def _run_workflow(self, wf_text, expected_state=states.ERROR):
        wf_service.create_workflows(wf_text)

        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_state(wf_ex.id, expected_state)

        return db_api.get_workflow_execution(wf_ex.id)

    def test_on_closures(self):
        wf_text = """
        version: '2.0'

        wf:
          # type: direct - 'direct' is default

          tasks:
            task1:
              description: |
                Explicit 'succeed' command should lead to workflow success.
              action: std.echo output="Echo"
              on-success:
                - task2
                - succeed
              on-complete:
                - task3
                - task4
                - fail
                - never_gets_here

            task2:
              action: std.noop

            task3:
              action: std.noop

            task4:
              action: std.noop

            never_gets_here:
              action: std.noop
        """

        wf_ex = self._run_workflow(wf_text, expected_state=states.SUCCESS)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            tasks = wf_ex.task_executions

            task1 = self._assert_single_item(tasks, name='task1')
            task2 = self._assert_single_item(tasks, name='task2')

            self.assertEqual(2, len(tasks))

        self.await_task_success(task1.id)
        self.await_task_success(task2.id)

        self.assertTrue(wf_ex.state, states.ERROR)

    def test_condition_transition_not_triggering(self):
        wf_text = """---
        version: '2.0'

        wf:
          input:
            - var: null

          tasks:
            task1:
              action: std.fail
              on-success:
                - task2
              on-error:
                - task3: <% $.var != null %>

            task2:
              action: std.noop

            task3:
              action: std.noop
        """

        wf_service.create_workflows(wf_text)

        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            tasks = wf_ex.task_executions

            task1 = self._assert_single_item(tasks, name='task1')

            self.assertEqual(1, len(tasks))

        self.await_task_error(task1.id)

        self.assertTrue(wf_ex.state, states.ERROR)

    def test_change_state_after_success(self):
        wf_text = """
        version: '2.0'

        wf:
          tasks:
            task1:
              action: std.echo output="Echo"
              on-success:
                - task2

            task2:
              action: std.noop
        """

        wf_service.create_workflows(wf_text)

        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_success(wf_ex.id)

        self.assertEqual(
            states.SUCCESS,
            self.engine.resume_workflow(wf_ex.id).state
        )
        self.assertRaises(
            exc.WorkflowException,
            self.engine.pause_workflow,
            wf_ex.id
        )
        self.assertEqual(
            states.SUCCESS,
            self.engine.stop_workflow(wf_ex.id, states.ERROR).state
        )

    def test_task_not_updated(self):
        wf_text = """
        version: 2.0

        wf:
          tasks:
            task1:
              action: std.echo
              input:
                output: <% task().result.content %>
        """

        wf_service.create_workflows(wf_text)

        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_success(wf_ex.id)

        self.assertEqual(
            states.SUCCESS,
            self.engine.resume_workflow(wf_ex.id).state
        )
        self.assertRaises(
            exc.WorkflowException,
            self.engine.pause_workflow,
            wf_ex.id
        )
        self.assertEqual(
            states.SUCCESS,
            self.engine.stop_workflow(wf_ex.id, states.ERROR).state
        )

    def test_wrong_task_input(self):
        wf_text = """
        version: '2.0'

        wf:
          type: direct

          tasks:
            task1:
              action: std.echo output="Echo"
              on-complete:
                - task2

            task2:
              description: Wrong task input should lead to workflow failure
              action: std.echo wrong_input="Hahaha"
        """

        wf_ex = self._run_workflow(wf_text)

        self.assertIn('Invalid input', wf_ex.state_info)
        self.assertTrue(wf_ex.state, states.ERROR)

    def test_wrong_first_task_input(self):
        wf_text = """
        version: '2.0'

        wf:
          type: direct

          tasks:
            task1:
              action: std.echo wrong_input="Ha-ha"
        """

        wf_ex = self._run_workflow(wf_text)

        self.assertIn("Invalid input", wf_ex.state_info)
        self.assertEqual(states.ERROR, wf_ex.state)

    def test_wrong_action(self):
        wf_text = """
        version: '2.0'

        wf:
          type: direct

          tasks:
            task1:
              action: std.echo output="Echo"
              on-complete:
                - task2

            task2:
              action: action.doesnt_exist
        """

        wf_ex = self._run_workflow(wf_text)

        # TODO(dzimine): Catch tasks caused error, and set them to ERROR:
        # TODO(dzimine): self.assertTrue(task_ex.state, states.ERROR)

        self.assertTrue(wf_ex.state, states.ERROR)
        self.assertIn("Failed to find action", wf_ex.state_info)

    def test_wrong_action_first_task(self):
        wf_text = """
        version: '2.0'

        wf:
          type: direct

          tasks:
            task1:
              action: wrong.task
        """

        wf_service.create_workflows(wf_text)

        wf_ex = self.engine.start_workflow('wf')

        self.assertIn(
            "Failed to find action [action_name=wrong.task]",
            wf_ex.state_info
        )
        self.assertEqual(states.ERROR, wf_ex.state)

    def test_next_task_with_input_yaql_error(self):
        wf_text = """
        version: '2.0'

        wf:
          type: direct

          tasks:
            task1:
              action: std.echo output="Echo"
              on-complete:
                - task2

            task2:
              action: std.echo output=<% wrong(yaql) %>
        """

        # Invoke workflow and assert workflow is in ERROR.
        wf_ex = self._run_workflow(wf_text)

        self.assertEqual(states.ERROR, wf_ex.state)
        self.assertIn('Can not evaluate YAQL expression', wf_ex.state_info)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(2, len(task_execs))

            # 'task1' should be in SUCCESS.
            task_1_ex = self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )

            # 'task1' should have exactly one action execution (in SUCCESS).
            task_1_action_exs = db_api.get_action_executions(
                task_execution_id=task_1_ex.id
            )

            self.assertEqual(1, len(task_1_action_exs))
            self.assertEqual(states.SUCCESS, task_1_action_exs[0].state)

            # 'task2' should exist but in ERROR.
            task_2_ex = self._assert_single_item(
                task_execs,
                name='task2',
                state=states.ERROR
            )

            # 'task2' must not have action executions.
            self.assertEqual(
                0,
                len(db_api.get_action_executions(
                    task_execution_id=task_2_ex.id
                ))
            )

    def test_async_next_task_with_input_yaql_error(self):
        wf_text = """
        version: '2.0'

        wf:
          type: direct

          tasks:
            task1:
              action: std.async_noop
              on-complete:
                - task2

            task2:
              action: std.echo output=<% wrong(yaql) %>
        """

        # Invoke workflow and assert workflow, task,
        # and async action execution are RUNNING.
        wf_ex = self._run_workflow(wf_text, states.RUNNING)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(states.RUNNING, wf_ex.state)
            self.assertEqual(1, len(task_execs))

            task_1_ex = self._assert_single_item(task_execs, name='task1')

            self.assertEqual(states.RUNNING, task_1_ex.state)

            task_1_action_exs = db_api.get_action_executions(
                task_execution_id=task_1_ex.id
            )

            self.assertEqual(1, len(task_1_action_exs))
            self.assertEqual(states.RUNNING, task_1_action_exs[0].state)

        # Update async action execution result.
        self.engine.on_action_complete(
            task_1_action_exs[0].id,
            ml_actions.Result(data='foobar')
        )

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(states.ERROR, wf_ex.state)
            self.assertIn('Can not evaluate YAQL expression', wf_ex.state_info)
            self.assertEqual(2, len(task_execs))

            # 'task1' must be in SUCCESS.
            task_1_ex = self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )

            # 'task1' must have exactly one action execution (in SUCCESS).
            task_1_action_exs = db_api.get_action_executions(
                task_execution_id=task_1_ex.id
            )

            self.assertEqual(1, len(task_1_action_exs))
            self.assertEqual(states.SUCCESS, task_1_action_exs[0].state)

            # 'task2' must be in ERROR.
            task_2_ex = self._assert_single_item(
                task_execs,
                name='task2',
                state=states.ERROR
            )

            # 'task2' must not have action executions.
            self.assertEqual(
                0,
                len(db_api.get_action_executions(
                    task_execution_id=task_2_ex.id
                ))
            )

    def test_messed_yaql_in_first_task(self):
        wf_text = """
        version: '2.0'

        wf:
          type: direct

          tasks:
            task1:
              action: std.echo output=<% wrong(yaql) %>
        """

        wf_service.create_workflows(wf_text)

        wf_ex = self.engine.start_workflow('wf')

        self.assertIn(
            "Can not evaluate YAQL expression [expression=wrong(yaql)",
            wf_ex.state_info
        )
        self.assertEqual(states.ERROR, wf_ex.state)

    def test_mismatched_yaql_in_first_task(self):
        wf_text = """
        version: '2.0'

        wf:
          input:
            - var

          tasks:
            task1:
              action: std.echo output=<% $.var + $.var2 %>
        """

        wf_service.create_workflows(wf_text)

        wf_ex = self.engine.start_workflow('wf', wf_input={'var': 2})

        self.assertIn("Can not evaluate YAQL expression", wf_ex.state_info)
        self.assertEqual(states.ERROR, wf_ex.state)

    def test_one_line_syntax_in_on_clauses(self):
        wf_text = """
        version: '2.0'

        wf:
          type: direct

          tasks:
            task1:
              action: std.echo output=1
              on-success: task2

            task2:
              action: std.echo output=1
              on-complete: task3

            task3:
              action: std.fail
              on-error: task4

            task4:
              action: std.echo output=4
        """
wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) def test_task_on_clause_has_yaql_error(self): wf_text = """ version: '2.0' wf: type: direct tasks: task1: action: std.noop on-success: - task2: <% wrong(yaql) %> task2: action: std.noop """ # Invoke workflow and assert workflow is in ERROR. wf_ex = self._run_workflow(wf_text) self.assertEqual(states.ERROR, wf_ex.state) self.assertIn('Can not evaluate YAQL expression', wf_ex.state_info) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions # Assert that there is only one task execution and it's SUCCESS. self.assertEqual(1, len(task_execs)) task_1_ex = self._assert_single_item( task_execs, name='task1' ) self.assertEqual(states.ERROR, task_1_ex.state) # Assert that there is only one action execution and it's SUCCESS. task_1_action_exs = db_api.get_action_executions( task_execution_id=task_1_ex.id ) self.assertEqual(1, len(task_1_action_exs)) self.assertEqual(states.SUCCESS, task_1_action_exs[0].state) def test_async_task_on_clause_has_yaql_error(self): wf_text = """ version: '2.0' wf: type: direct tasks: task1: action: std.async_noop on-complete: - task2: <% wrong(yaql) %> task2: action: std.noop """ # Invoke workflow and assert workflow, task, # and async action execution are RUNNING. wf_ex = self._run_workflow(wf_text, states.RUNNING) self.assertEqual(states.RUNNING, wf_ex.state) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions self.assertEqual(1, len(task_execs)) task_1_ex = self._assert_single_item(task_execs, name='task1') self.assertEqual(states.RUNNING, task_1_ex.state) task_1_action_exs = db_api.get_action_executions( task_execution_id=task_1_ex.id ) self.assertEqual(1, len(task_1_action_exs)) self.assertEqual(states.RUNNING, task_1_action_exs[0].state) # Update async action execution result. 
self.engine.on_action_complete( task_1_action_exs[0].id, ml_actions.Result(data='foobar') ) # Assert that task1 is SUCCESS and workflow is ERROR. with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions self.assertEqual(states.ERROR, wf_ex.state) self.assertIn('Can not evaluate YAQL expression', wf_ex.state_info) self.assertEqual(1, len(task_execs)) task_1_ex = self._assert_single_item(task_execs, name='task1') self.assertEqual(states.ERROR, task_1_ex.state) task_1_action_exs = db_api.get_action_executions( task_execution_id=task_1_ex.id ) self.assertEqual(1, len(task_1_action_exs)) self.assertEqual(states.SUCCESS, task_1_action_exs[0].state) def test_inconsistent_task_names(self): wf_text = """ version: '2.0' wf: tasks: task1: action: std.noop on-success: task3 task2: action: std.noop """ exception = self.assertRaises( exc.InvalidModelException, wf_service.create_workflows, wf_text ) self.assertIn("Task 'task3' not found", str(exception)) def test_delete_workflow_completion_check_on_stop(self): wf_text = """--- version: '2.0' wf: tasks: async_task: action: std.async_noop """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') calls = db_api.get_delayed_calls() mtd_name = 'mistral.engine.workflow_handler._check_and_complete' self._assert_single_item(calls, target_method_name=mtd_name) self.engine.stop_workflow(wf_ex.id, state=states.CANCELLED) self._await( lambda: len(db_api.get_delayed_calls(target_method_name=mtd_name)) == 0 ) def test_delete_workflow_completion_on_execution_delete(self): wf_text = """--- version: '2.0' wf: tasks: async_task: action: std.async_noop """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') calls = db_api.get_delayed_calls() mtd_name = 'mistral.engine.workflow_handler._check_and_complete' self._assert_single_item(calls, target_method_name=mtd_name) db_api.delete_workflow_execution(wf_ex.id) self._await( lambda: 
len(db_api.get_delayed_calls(target_method_name=mtd_name)) == 0 ) def test_output(self): wf_text = """--- version: '2.0' wf: tasks: task1: action: std.echo output="Hi Mistral!" on-success: task2 task2: action: std.noop """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertDictEqual({}, wf_ex.output) def test_triggered_by(self): wf_text = """--- version: '2.0' wf: tasks: task1: action: std.noop on-success: task2 task2: action: std.fail on-error: task3 task3: action: std.fail on-error: noop on-success: task4 on-complete: task4 task4: action: std.noop """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions task1 = self._assert_single_item(task_execs, name='task1') task2 = self._assert_single_item(task_execs, name='task2') task3 = self._assert_single_item(task_execs, name='task3') task4 = self._assert_single_item(task_execs, name='task4') key = 'triggered_by' self.assertIsNone(task1.runtime_context.get(key)) self.assertListEqual( [ { "task_id": task1.id, "event": "on-success" } ], task2.runtime_context.get(key) ) self.assertListEqual( [ { "task_id": task2.id, "event": "on-error" } ], task3.runtime_context.get(key) ) self.assertListEqual( [ { "task_id": task3.id, "event": "on-complete" } ], task4.runtime_context.get(key) ) mistral-6.0.0/mistral/tests/unit/engine/test_direct_workflow_with_cycles.py0000666000175100017510000001522013245513262027474 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg from mistral.db.v2 import api as db_api from mistral.services import workflows as wf_service from mistral.tests.unit.engine import base from mistral.workflow import data_flow from mistral.workflow import states # Use the set_default method to set value otherwise in certain test cases # the change in value is not permanent. cfg.CONF.set_default('auth_enable', False, group='pecan') class DirectWorkflowWithCyclesTest(base.EngineTestCase): def test_simple_cycle(self): wf_text = """ version: '2.0' wf: vars: cnt: 0 output: cnt: <% $.cnt %> tasks: task1: on-complete: - task2 task2: action: std.echo output=2 publish: cnt: <% $.cnt + 1 %> on-success: - task3 task3: action: std.echo output=3 on-success: - task2: <% $.cnt < 2 %> """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertDictEqual({'cnt': 2}, wf_ex.output) t_execs = wf_ex.task_executions # Expecting one execution for task1 and two executions # for task2 and task3 because of the cycle 'task2 <-> task3'. 
self._assert_single_item(t_execs, name='task1') self._assert_multiple_items(t_execs, 2, name='task2') self._assert_multiple_items(t_execs, 2, name='task3') self.assertEqual(5, len(t_execs)) self.assertEqual(states.SUCCESS, wf_ex.state) self.assertTrue(all(states.SUCCESS == t_ex.state for t_ex in t_execs)) def test_complex_cycle(self): wf_text = """ version: '2.0' wf: vars: cnt: 0 output: cnt: <% $.cnt %> tasks: task1: on-complete: - task2 task2: action: std.echo output=2 publish: cnt: <% $.cnt + 1 %> on-success: - task3 task3: action: std.echo output=3 on-complete: - task4 task4: action: std.echo output=4 on-success: - task2: <% $.cnt < 2 %> - task5: <% $.cnt >= 2 %> task5: action: std.echo output=<% $.cnt %> """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertDictEqual({'cnt': 2}, wf_ex.output) t_execs = wf_ex.task_executions # Expecting one execution for task1 and task5 and two executions # for task2, task3 and task4 because of the cycle # 'task2 -> task3 -> task4 -> task2'. 
self._assert_single_item(t_execs, name='task1') self._assert_multiple_items(t_execs, 2, name='task2') self._assert_multiple_items(t_execs, 2, name='task3') self._assert_multiple_items(t_execs, 2, name='task4') task5_ex = self._assert_single_item(t_execs, name='task5') self.assertEqual(8, len(t_execs)) self.assertEqual(states.SUCCESS, wf_ex.state) self.assertTrue(all(states.SUCCESS == t_ex.state for t_ex in t_execs)) with db_api.transaction(): task5_ex = db_api.get_task_execution(task5_ex.id) self.assertEqual(2, data_flow.get_task_execution_result(task5_ex)) def test_parallel_cycles(self): wf_text = """ version: '2.0' wf: vars: cnt: 0 output: cnt: <% $.cnt %> tasks: task1: on-complete: - task1_2 - task2_2 task1_2: action: std.echo output=2 publish: cnt: <% $.cnt + 1 %> on-success: - task1_3 task1_3: action: std.echo output=3 on-success: - task1_2: <% $.cnt < 2 %> task2_2: action: std.echo output=2 publish: cnt: <% $.cnt + 1 %> on-success: - task2_3 task2_3: action: std.echo output=3 on-success: - task2_2: <% $.cnt < 3 %> """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) wf_output = wf_ex.output t_execs = wf_ex.task_executions # NOTE: We have two cycles in parallel workflow branches # and those branches will have their own copy of "cnt" variable # so both cycles must complete correctly. 
self._assert_single_item(t_execs, name='task1') self._assert_multiple_items(t_execs, 2, name='task1_2') self._assert_multiple_items(t_execs, 2, name='task1_3') self._assert_multiple_items(t_execs, 3, name='task2_2') self._assert_multiple_items(t_execs, 3, name='task2_3') self.assertEqual(11, len(t_execs)) self.assertEqual(states.SUCCESS, wf_ex.state) self.assertTrue(all(states.SUCCESS == t_ex.state for t_ex in t_execs)) # TODO(rakhmerov): We have this uncertainty because of the known # bug: https://bugs.launchpad.net/mistral/liberty/+bug/1424461 # Now workflow output is almost always 3 because the second cycle # takes longer hence it wins because of how DB queries work: they # order entities in ascending of creation time. self.assertTrue(wf_output['cnt'] == 2 or wf_output['cnt'] == 3) mistral-6.0.0/mistral/tests/unit/engine/test_integrity_check.py0000666000175100017510000000532213245513262025050 0ustar zuulzuul00000000000000# Copyright 2016 - Nokia Networks. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
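The cycle tests above rest on a simple counting argument: each pass through the loop publishes `cnt + 1`, and the `on-success` guard `<% $.cnt < 2 %>` admits re-entry only while the published counter is below the threshold. As a sanity check, here is a plain-Python stand-in for the `task2 <-> task3` cycle from `test_simple_cycle` (an illustrative model only, not Mistral's engine):

```python
# Minimal stand-in for the 'task2 <-> task3' cycle in test_simple_cycle:
# task2 publishes cnt + 1; task3 routes back to task2 while cnt < 2.
def run_simple_cycle():
    executions = {'task1': 0, 'task2': 0, 'task3': 0}
    cnt = 0

    executions['task1'] += 1           # task1 -> on-complete -> task2
    while True:
        executions['task2'] += 1       # task2 publishes cnt: <% $.cnt + 1 %>
        cnt += 1
        executions['task3'] += 1       # task3 -> on-success -> task2 if cnt < 2
        if not cnt < 2:
            break

    return cnt, executions

cnt, execs = run_simple_cycle()
print(cnt, execs, sum(execs.values()))
# cnt ends at 2, task2 and task3 each ran twice, and there are 5 task
# executions in total - the exact numbers asserted in test_simple_cycle.
```

The same bookkeeping explains the expected counts in `test_complex_cycle` (8 executions) and `test_parallel_cycles` (11 executions, with each branch holding its own copy of `cnt`).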
from mistral.db.v2 import api as db_api
from mistral.services import workflows as wf_service
from mistral.tests.unit.engine import base
from mistral.workflow import states


class IntegrityCheckTest(base.EngineTestCase):
    def setUp(self):
        super(IntegrityCheckTest, self).setUp()

        self.override_config('auth_enable', False, group='pecan')
        self.override_config(
            'execution_integrity_check_delay',
            2,
            group='engine'
        )

    def test_task_execution_integrity(self):
        # The idea of the test is that we use the no-op asynchronous action
        # so that action and task execution state is not automatically set
        # to SUCCESS after we start the workflow. We'll update the action
        # execution state to SUCCESS directly through the DB and will wait
        # till task execution integrity is checked and fixed automatically
        # by a periodic job after about 2 seconds.
        wf_text = """
        version: '2.0'

        wf:
          tasks:
            task1:
              action: std.noop
              on-success: task2

            task2:
              action: std.async_noop
        """

        wf_service.create_workflows(wf_text)

        wf_ex = self.engine.start_workflow('wf')

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task1_ex = self._assert_single_item(
                wf_ex.task_executions,
                name='task1'
            )

        self.await_task_success(task1_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task2_ex = self._assert_single_item(
                wf_ex.task_executions,
                name='task2',
                state=states.RUNNING
            )

            action2_ex = self._assert_single_item(
                task2_ex.executions,
                state=states.RUNNING
            )

        db_api.update_action_execution(
            action2_ex.id,
            {'state': states.SUCCESS}
        )

        self.await_task_success(task2_ex.id)
        self.await_workflow_success(wf_ex.id)

mistral-6.0.0/mistral/tests/unit/engine/test_set_state.py
# Copyright 2017 - Nokia Networks
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_config import cfg

from mistral.db.v2 import api as db_api
from mistral.engine import workflows
from mistral.services import workflows as wf_service
from mistral.tests.unit.engine import base


# Use the set_default method to set value otherwise in certain test cases
# the change in value is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan')


class TestSetState(base.EngineTestCase):
    def test_set_state(self):
        wf_text = """
        version: '2.0'

        wf:
          tasks:
            task1:
              action: std.echo output="Echo"
              on-success:
                - task2

            task2:
              action: std.noop
        """

        wf_service.create_workflows(wf_text)

        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_success(wf_ex.id)

        # The state in db is SUCCESS, but wf_ex still contains outdated info.
        self.assertEqual("RUNNING", wf_ex.state)

        wf = workflows.Workflow(wf_ex)

        # Trying to change the status of succeed execution. There is no error,
        # only warning message that state has been changed in db.
        wf.set_state("ERROR")

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            self.assertEqual("SUCCESS", wf_ex.state)

mistral-6.0.0/mistral/tests/unit/engine/test_task_cancel.py
# Copyright 2015 - StackStorm, Inc.
# Copyright 2016 - Brocade Communications Systems, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import mock import testtools from mistral.actions import std_actions from mistral.db.v2 import api as db_api from mistral.services import workbooks as wb_service from mistral.services import workflows as wf_service from mistral.tests.unit.engine import base from mistral.workflow import states from mistral_lib import actions as ml_actions class TaskCancelTest(base.EngineTestCase): def test_cancel_action_execution(self): workflow = """ version: '2.0' wf: tasks: task1: action: std.async_noop on-success: - task2 on-error: - task3 on-complete: - task4 task2: action: std.noop task3: action: std.noop task4: action: std.noop """ wf_service.create_workflows(workflow) wf_ex = self.engine.start_workflow('wf') self.await_workflow_state(wf_ex.id, states.RUNNING) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() wf_ex = self._assert_single_item(wf_execs, name='wf') task_1_ex = self._assert_single_item( wf_ex.task_executions, name='task1' ) task_1_action_exs = db_api.get_action_executions( task_execution_id=task_1_ex.id ) self.assertEqual(1, len(task_1_action_exs)) self.assertEqual(states.RUNNING, task_1_action_exs[0].state) self.engine.on_action_complete( task_1_action_exs[0].id, ml_actions.Result(cancel=True) ) self.await_workflow_cancelled(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_1_ex = self._assert_single_item( wf_ex.task_executions, name='task1' ) self.await_task_cancelled(task_1_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions task_1_ex = 
self._assert_single_item(task_execs, name='task1') task_1_action_exs = db_api.get_action_executions( task_execution_id=task_1_ex.id ) self.assertEqual(states.CANCELLED, wf_ex.state) self.assertEqual("Cancelled tasks: task1", wf_ex.state_info) self.assertEqual(1, len(task_execs)) self.assertEqual(states.CANCELLED, task_1_ex.state) self.assertIsNone(task_1_ex.state_info) self.assertEqual(1, len(task_1_action_exs)) self.assertEqual(states.CANCELLED, task_1_action_exs[0].state) self.assertIsNone(task_1_action_exs[0].state_info) def test_cancel_child_workflow_action_execution(self): workbook = """ version: '2.0' name: wb workflows: wf: tasks: taskx: workflow: subwf subwf: tasks: task1: action: std.async_noop on-success: - task2 on-error: - task3 on-complete: - task4 task2: action: std.noop task3: action: std.noop task4: action: std.noop """ wb_service.create_workbook_v2(workbook) wf_ex = self.engine.start_workflow('wb.wf') self.await_workflow_state(wf_ex.id, states.RUNNING) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() wf_ex = self._assert_single_item(wf_execs, name='wb.wf') task_ex = self._assert_single_item( wf_ex.task_executions, name='taskx' ) subwf_ex = self._assert_single_item(wf_execs, name='wb.subwf') task_1_ex = self._assert_single_item( subwf_ex.task_executions, name='task1' ) task_1_action_exs = db_api.get_action_executions( task_execution_id=task_1_ex.id ) self.assertEqual(1, len(task_1_action_exs)) self.assertEqual(states.RUNNING, task_1_action_exs[0].state) self.engine.on_action_complete( task_1_action_exs[0].id, ml_actions.Result(cancel=True) ) self.await_workflow_cancelled(subwf_ex.id) self.await_task_cancelled(task_ex.id) self.await_workflow_cancelled(wf_ex.id) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() wf_ex = self._assert_single_item(wf_execs, name='wb.wf') task_ex = self._assert_single_item( wf_ex.task_executions, name='taskx' ) subwf_ex = self._assert_single_item(wf_execs, name='wb.subwf') 
subwf_task_execs = subwf_ex.task_executions self.assertEqual(states.CANCELLED, subwf_ex.state) self.assertEqual("Cancelled tasks: task1", subwf_ex.state_info) self.assertEqual(1, len(subwf_task_execs)) self.assertEqual(states.CANCELLED, task_ex.state) self.assertEqual("Cancelled tasks: task1", task_ex.state_info) self.assertEqual(states.CANCELLED, wf_ex.state) self.assertEqual("Cancelled tasks: taskx", wf_ex.state_info) def test_cancel_action_execution_with_task_retry(self): workflow = """ version: '2.0' wf: tasks: task1: action: std.async_noop retry: count: 3 delay: 0 on-success: - task2 task2: action: std.noop """ wf_service.create_workflows(workflow) wf_ex = self.engine.start_workflow('wf') self.await_workflow_state(wf_ex.id, states.RUNNING) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() wf_ex = self._assert_single_item(wf_execs, name='wf') task_1_ex = self._assert_single_item( wf_ex.task_executions, name='task1' ) task_1_action_exs = db_api.get_action_executions( task_execution_id=task_1_ex.id ) self.assertEqual(1, len(task_1_action_exs)) self.assertEqual(states.RUNNING, task_1_action_exs[0].state) self.engine.on_action_complete( task_1_action_exs[0].id, ml_actions.Result(cancel=True) ) self.await_workflow_cancelled(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_1_ex = self._assert_single_item( wf_ex.task_executions, name='task1' ) self.await_task_cancelled(task_1_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions task_1_ex = self._assert_single_item(task_execs, name='task1') task_1_action_exs = db_api.get_action_executions( task_execution_id=task_1_ex.id ) self.assertEqual(states.CANCELLED, wf_ex.state) self.assertEqual("Cancelled tasks: task1", wf_ex.state_info) self.assertEqual(1, len(task_execs)) self.assertEqual(states.CANCELLED, task_1_ex.state) self.assertIsNone(task_1_ex.state_info) self.assertEqual(1, 
len(task_1_action_exs)) self.assertEqual(states.CANCELLED, task_1_action_exs[0].state) self.assertIsNone(task_1_action_exs[0].state_info) @testtools.skip('Restore concurrency support.') @mock.patch.object( std_actions.EchoAction, 'run', mock.MagicMock( side_effect=[ 'Task 2' # Mock task2 success. ] ) ) def test_cancel_with_items_concurrency(self): wb_def = """ version: '2.0' name: wb1 workflows: wf1: tasks: t1: with-items: i in <% list(range(0, 4)) %> action: std.async_noop concurrency: 2 on-success: - t2 t2: action: std.echo output="Task 2" """ wb_service.create_workbook_v2(wb_def) wf1_ex = self.engine.start_workflow('wb1.wf1') self.await_workflow_state(wf1_ex.id, states.RUNNING) with db_api.transaction(): wf1_execs = db_api.get_workflow_executions() wf1_ex = self._assert_single_item(wf1_execs, name='wb1.wf1') wf1_t1_ex = self._assert_single_item( wf1_ex.task_executions, name='t1' ) wf1_t1_action_exs = db_api.get_action_executions( task_execution_id=wf1_t1_ex.id ) self.assertEqual(2, len(wf1_t1_action_exs)) self.assertEqual(states.RUNNING, wf1_t1_action_exs[0].state) self.assertEqual(states.RUNNING, wf1_t1_action_exs[1].state) # Cancel action execution for task. for wf1_t1_action_ex in wf1_t1_action_exs: self.engine.on_action_complete( wf1_t1_action_ex.id, ml_actions.Result(cancel=True) ) self.await_task_cancelled(wf1_t1_ex.id) self.await_workflow_cancelled(wf1_ex.id) wf1_t1_action_exs = db_api.get_action_executions( task_execution_id=wf1_t1_ex.id ) self.assertEqual(2, len(wf1_t1_action_exs)) self.assertEqual(states.CANCELLED, wf1_t1_action_exs[0].state) self.assertEqual(states.CANCELLED, wf1_t1_action_exs[1].state) mistral-6.0.0/mistral/tests/unit/engine/test_lookup_utils.py0000666000175100017510000000534413245513272024433 0ustar zuulzuul00000000000000# Copyright 2017 - Nokia Networks. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg from mistral.db.v2 import api as db_api from mistral.services import workflows as wf_service from mistral.tests.unit.engine import base from mistral.workflow import lookup_utils from mistral.workflow import states # Use the set_default method to set value otherwise in certain test cases # the change in value is not permanent. cfg.CONF.set_default('auth_enable', False, group='pecan') class LookupUtilsTest(base.EngineTestCase): def test_task_execution_cache_invalidation(self): wf_text = """--- version: '2.0' wf: tasks: task1: action: std.noop on-success: join_task task2: action: std.noop on-success: join_task join_task: join: all on-success: task4 task4: action: std.noop pause-before: true """ wf_service.create_workflows(wf_text) # Start workflow. wf_ex = self.engine.start_workflow('wf') self.await_workflow_paused(wf_ex.id) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions self.assertEqual(4, len(tasks)) self._assert_single_item(tasks, name='task1', state=states.SUCCESS) self._assert_single_item(tasks, name='task2', state=states.SUCCESS) self._assert_single_item(tasks, name='join_task', state=states.SUCCESS) self._assert_single_item(tasks, name='task4', state=states.IDLE) # Expecting one cache entry because we know that 'join' operation # uses cached lookups and the workflow is not finished yet. 
self.assertEqual(1, lookup_utils.get_task_execution_cache_size()) self.engine.resume_workflow(wf_ex.id) self.await_workflow_success(wf_ex.id) # Expecting that the cache size is 0 because the workflow has # finished and invalidated corresponding cache entry. self.assertEqual(0, lookup_utils.get_task_execution_cache_size()) mistral-6.0.0/mistral/tests/unit/engine/test_with_items.py0000666000175100017510000011157513245513262024061 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy import mock from oslo_config import cfg from mistral.actions import std_actions from mistral.db.v2 import api as db_api from mistral import exceptions as exc from mistral.services import workbooks as wb_service from mistral.services import workflows as wf_service from mistral.tests.unit import base as test_base from mistral.tests.unit.engine import base from mistral import utils from mistral.workflow import data_flow from mistral.workflow import states from mistral_lib import actions as actions_base # TODO(nmakhotkin) Need to write more tests. # Use the set_default method to set value otherwise in certain test cases # the change in value is not permanent. 
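The cache assertions in `test_task_execution_cache_invalidation` above describe the contract: `join` lookups populate one per-workflow cache entry while the workflow runs, and workflow completion invalidates that entry. A rough pure-Python model of that contract (names below are illustrative, not Mistral's real `lookup_utils` API):

```python
# Hypothetical sketch of the cache lifecycle asserted above: one entry
# per running workflow execution, removed when the workflow finishes.
_cache = {}

def cache_task_lookup(wf_ex_id, task_name, task_ex):
    # A 'join' task looking up its upstream tasks populates the cache.
    _cache.setdefault(wf_ex_id, {})[task_name] = task_ex

def invalidate(wf_ex_id):
    # Workflow completion drops the whole per-workflow entry.
    _cache.pop(wf_ex_id, None)

def cache_size():
    return len(_cache)

cache_task_lookup('wf-1', 'task1', object())
print(cache_size())   # one entry while the workflow is still running
invalidate('wf-1')
print(cache_size())   # zero once the workflow has finished
```

This mirrors the two `get_task_execution_cache_size()` assertions: 1 while the workflow is paused, 0 after it succeeds.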
cfg.CONF.set_default('auth_enable', False, group='pecan') WB = """ --- version: "2.0" name: wb workflows: wf: type: direct input: - names_info tasks: task1: with-items: name_info in <% $.names_info %> action: std.echo output=<% $.name_info.name %> publish: result: <% task(task1).result[0] %> """ WB_WITH_STATIC_VAR = """ --- version: "2.0" name: wb workflows: wf: type: direct input: - names_info - greeting tasks: task1: with-items: name_info in <% $.names_info %> action: std.echo output="<% $.greeting %>, <% $.name_info.name %>!" publish: result: <% task(task1).result %> """ WB_MULTI_ARRAY = """ --- version: "2.0" name: wb workflows: wf: type: direct input: - arrayI - arrayJ tasks: task1: with-items: - itemX in <% $.arrayI %> - itemY in <% $.arrayJ %> action: std.echo output="<% $.itemX %> <% $.itemY %>" publish: result: <% task(task1).result %> """ WB_ACTION_CONTEXT = """ --- version: "2.0" name: wb workflows: wf: type: direct input: - links tasks: task1: with-items: link in <% $.links %> action: std.http url=<% $.link %> publish: result: <% task(task1) %> """ WF_INPUT = { 'names_info': [ {'name': 'John'}, {'name': 'Ivan'}, {'name': 'Mistral'} ] } WF_INPUT_URLS = { 'links': [ 'http://google.com', 'http://openstack.org', 'http://google.com' ] } WF_INPUT_ONE_ITEM = { 'names_info': [ {'name': 'Guy'} ] } class RandomSleepEchoAction(actions_base.Action): def __init__(self, output): self.output = output def run(self, context): utils.random_sleep(1) return self.output def test(self): utils.random_sleep(1) class WithItemsEngineTest(base.EngineTestCase): def _assert_capacity(self, capacity, task_ex): self.assertEqual( capacity, task_ex.runtime_context['with_items']['capacity'] ) @staticmethod def _get_incomplete_action(task_ex): return [e for e in task_ex.executions if not e.accepted][0] @staticmethod def _get_running_actions_count(task_ex): return len( [e for e in task_ex.executions if e.state == states.RUNNING] ) def test_with_items_simple(self): 
wb_service.create_workbook_v2(WB) # Start workflow. wf_ex = self.engine.start_workflow('wb.wf', wf_input=WF_INPUT) self.await_workflow_success(wf_ex.id) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions task1_ex = self._assert_single_item(task_execs, name='task1') with_items_ctx = task1_ex.runtime_context['with_items'] self.assertEqual(3, with_items_ctx['count']) # Since we know that we can receive results in random order, # check is not depend on order of items. with db_api.transaction(): task1_ex = db_api.get_task_execution(task1_ex.id) result = data_flow.get_task_execution_result(task1_ex) self.assertIsInstance(result, list) self.assertIn('John', result) self.assertIn('Ivan', result) self.assertIn('Mistral', result) published = task1_ex.published self.assertIn(published['result'], ['John', 'Ivan', 'Mistral']) self.assertEqual(1, len(task_execs)) self.assertEqual(states.SUCCESS, task1_ex.state) def test_with_items_fail(self): wf_text = """--- version: "2.0" wf: type: direct tasks: task1: with-items: i in [1, 2, 3] action: std.fail on-error: task2 task2: action: std.echo output="With-items failed" """ wf_service.create_workflows(wf_text) # Start workflow. wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertEqual(2, len(wf_ex.task_executions)) def test_with_items_yaql_fail(self): wf_text = """--- version: "2.0" wf: type: direct tasks: task1: with-items: i in <% $.foobar %> action: std.noop """ wf_service.create_workflows(wf_text) # Start workflow. wf_ex = self.engine.start_workflow('wf') self.await_workflow_error(wf_ex.id) with db_api.transaction(): # Note: We need to reread execution to access related tasks. 
wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions task1 = self._assert_single_item(tasks, name='task1') result = data_flow.get_task_execution_result(task1) self.assertEqual(states.ERROR, task1.state) self.assertIsInstance(result, list) self.assertListEqual(result, []) def test_with_items_sub_workflow_fail(self): wb_text = """--- version: "2.0" name: wb1 workflows: wf: type: direct tasks: task1: with-items: i in [1, 2, 3] workflow: subwf on-error: task2 task2: action: std.echo output="With-items failed" subwf: type: direct tasks: fail-task: action: std.fail """ wb_service.create_workbook_v2(wb_text) # Start workflow. wf_ex = self.engine.start_workflow('wb1.wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertEqual(2, len(wf_ex.task_executions)) def test_with_items_static_var(self): wb_service.create_workbook_v2(WB_WITH_STATIC_VAR) wf_input = copy.deepcopy(WF_INPUT) wf_input.update({'greeting': 'Hello'}) # Start workflow. wf_ex = self.engine.start_workflow('wb.wf', wf_input=wf_input) self.await_workflow_success(wf_ex.id) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions task1 = self._assert_single_item(tasks, name='task1') result = data_flow.get_task_execution_result(task1) self.assertIsInstance(result, list) self.assertIn('Hello, John!', result) self.assertIn('Hello, Ivan!', result) self.assertIn('Hello, Mistral!', result) self.assertEqual(1, len(tasks)) self.assertEqual(states.SUCCESS, task1.state) def test_with_items_multi_array(self): wb_service.create_workbook_v2(WB_MULTI_ARRAY) wf_input = {'arrayI': ['a', 'b', 'c'], 'arrayJ': [1, 2, 3]} # Start workflow. 
        wf_ex = self.engine.start_workflow('wb.wf', wf_input=wf_input)

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            task1_ex = self._assert_single_item(task_execs, name='task1')

            # Since we know that we can receive results in random order,
            # the check does not depend on the order of items.
            result = data_flow.get_task_execution_result(task1_ex)

        self.assertIsInstance(result, list)
        self.assertIn('a 1', result)
        self.assertIn('b 2', result)
        self.assertIn('c 3', result)

        self.assertEqual(1, len(task_execs))
        self.assertEqual(states.SUCCESS, task1_ex.state)

    def test_with_items_action_context(self):
        wb_service.create_workbook_v2(WB_ACTION_CONTEXT)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wb.wf', wf_input=WF_INPUT_URLS)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_ex = wf_ex.task_executions[0]

            act_exs = task_ex.executions

        self.engine.on_action_complete(
            act_exs[0].id,
            actions_base.Result("Ivan")
        )
        self.engine.on_action_complete(
            act_exs[1].id,
            actions_base.Result("John")
        )
        self.engine.on_action_complete(
            act_exs[2].id,
            actions_base.Result("Mistral")
        )

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            task_ex = db_api.get_task_execution(task_ex.id)

            result = data_flow.get_task_execution_result(task_ex)

        self.assertIsInstance(result, list)
        self.assertIn('John', result)
        self.assertIn('Ivan', result)
        self.assertIn('Mistral', result)

        self.assertEqual(states.SUCCESS, task_ex.state)

    def test_with_items_empty_list(self):
        wb_text = """---
        version: "2.0"

        name: wb1

        workflows:
          with_items:
            type: direct

            input:
              - names_info

            tasks:
              task1:
                with-items: name_info in <% $.names_info %>
                action: std.echo output=<% $.name_info.name %>
                on-success:
                  - task2

              task2:
                action: std.echo output="Hi!"
        """

        wb_service.create_workbook_v2(wb_text)

        # Start workflow.
        wf_input = {'names_info': []}

        wf_ex = self.engine.start_workflow('wb1.with_items', wf_input=wf_input)

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            task1_ex = self._assert_single_item(task_execs, name='task1')
            task2_ex = self._assert_single_item(task_execs, name='task2')

        self.assertEqual(2, len(task_execs))
        self.assertEqual(states.SUCCESS, task1_ex.state)
        self.assertEqual(states.SUCCESS, task2_ex.state)

    def test_with_items_plain_list(self):
        wb_text = """---
        version: "2.0"

        name: wb1

        workflows:
          with_items:
            type: direct

            tasks:
              task1:
                with-items: i in [1, 2, 3]
                action: std.echo output=<% $.i %>
        """

        wb_service.create_workbook_v2(wb_text)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wb1.with_items')

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task1_ex = self._assert_single_item(
                wf_ex.task_executions,
                name='task1',
                state=states.SUCCESS
            )

            result = data_flow.get_task_execution_result(task1_ex)

        # Since we know that we can receive results in random order,
        # the check does not depend on the order of items.
        self.assertIn(1, result)
        self.assertIn(2, result)
        self.assertIn(3, result)

    def test_with_items_plain_list_wrong(self):
        wb_text = """---
        version: "2.0"

        name: wb1

        workflows:
          with_items:
            type: direct

            tasks:
              task1:
                with-items: i in [1,,3]
                action: std.echo output=<% $.i %>
        """

        exception = self.assertRaises(
            exc.InvalidModelException,
            wb_service.create_workbook_v2,
            wb_text
        )

        self.assertIn("Invalid array in 'with-items'", str(exception))

    def test_with_items_results_order(self):
        wb_text = """---
        version: "2.0"

        name: wb1

        workflows:
          with_items:
            type: direct

            tasks:
              task1:
                with-items: i in [1, 2, 3]
                action: sleep_echo output=<% $.i %>
                publish:
                  one_two_three: <% task(task1).result %>
        """

        # Register random sleep action in the DB.
        test_base.register_action_class('sleep_echo', RandomSleepEchoAction)

        wb_service.create_workbook_v2(wb_text)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wb1.with_items')

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            task1_ex = self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )

            published = task1_ex.published

        # Now we can check order of results explicitly.
        self.assertEqual([1, 2, 3], published['one_two_three'])

    def test_with_items_results_one_item_as_list(self):
        wb_service.create_workbook_v2(WB)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wb.wf', wf_input=WF_INPUT_ONE_ITEM)

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            task1_ex = self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )

            result = data_flow.get_task_execution_result(task1_ex)

        self.assertIsInstance(result, list)
        self.assertIn('Guy', result)

        self.assertIn(task1_ex.published['result'], ['Guy'])

    def test_with_items_concurrency_1(self):
        wf_with_concurrency_1 = """---
        version: "2.0"

        wf:
          input:
            - names: ["John", "Ivan", "Mistral"]

          tasks:
            task1:
              action: std.async_noop
              with-items: name in <% $.names %>
              concurrency: 1
        """

        wf_service.create_workflows(wf_with_concurrency_1)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wf')

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            # Also initialize lazy collections.
            task_ex = wf_ex.task_executions[0]

            self._assert_capacity(0, task_ex)
            self.assertEqual(1, self._get_running_actions_count(task_ex))

        # 1st iteration complete.
        self.engine.on_action_complete(
            self._get_incomplete_action(task_ex).id,
            actions_base.Result("John")
        )

        # Wait till the delayed on_action_complete is processed.
        # 1 is always there to periodically check WF completion.
        self._await(lambda: len(db_api.get_delayed_calls()) == 1)

        with db_api.transaction():
            task_ex = db_api.get_task_execution(task_ex.id)

            self._assert_capacity(0, task_ex)
            self.assertEqual(1, self._get_running_actions_count(task_ex))

        # 2nd iteration complete.
        self.engine.on_action_complete(
            self._get_incomplete_action(task_ex).id,
            actions_base.Result("Ivan")
        )

        self._await(lambda: len(db_api.get_delayed_calls()) == 1)

        with db_api.transaction():
            task_ex = db_api.get_task_execution(task_ex.id)

            self._assert_capacity(0, task_ex)
            self.assertEqual(1, self._get_running_actions_count(task_ex))

        # 3rd iteration complete.
        self.engine.on_action_complete(
            self._get_incomplete_action(task_ex).id,
            actions_base.Result("Mistral")
        )

        self._await(lambda: len(db_api.get_delayed_calls()) in (0, 1))

        task_ex = db_api.get_task_execution(task_ex.id)

        self._assert_capacity(1, task_ex)

        self.await_workflow_success(wf_ex.id)

        # Since we know that we can receive results in random order,
        # the check does not depend on order of items.
        with db_api.transaction():
            task_ex = db_api.get_task_execution(task_ex.id)

            result = data_flow.get_task_execution_result(task_ex)

        self.assertIsInstance(result, list)
        self.assertIn('John', result)
        self.assertIn('Ivan', result)
        self.assertIn('Mistral', result)

        self.assertEqual(states.SUCCESS, task_ex.state)

    def test_with_items_concurrency_yaql(self):
        # TODO(rakhmerov): This test passes even with broken 'concurrency'.
        # The idea of the test is not fully clear.
        wf_text = """---
        version: "2.0"

        wf:
          type: direct

          input:
            - names: ["John", "Ivan", "Mistral"]
            - concurrency

          tasks:
            task1:
              action: std.echo output=<% $.name %>
              with-items: name in <% $.names %>
              concurrency: <% $.concurrency %>
        """

        wf_service.create_workflows(wf_text)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wf', wf_input={'concurrency': 2})

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_ex = wf_ex.task_executions[0]

            self.assertEqual(states.SUCCESS, task_ex.state)

            result = data_flow.get_task_execution_result(task_ex)

        self.assertIsInstance(result, list)

        # Since we know that we can receive results in random order,
        # the check does not depend on order of items.
        self.assertIn('John', result)
        self.assertIn('Ivan', result)
        self.assertIn('Mistral', result)

    def test_with_items_concurrency_yaql_wrong_type(self):
        wf_with_concurrency_yaql = """---
        version: "2.0"

        wf:
          type: direct

          input:
            - names: ["John", "Ivan", "Mistral"]
            - concurrency

          tasks:
            task1:
              action: std.echo output=<% $.name %>
              with-items: name in <% $.names %>
              concurrency: <% $.concurrency %>
        """

        wf_service.create_workflows(wf_with_concurrency_yaql)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wf', wf_input={'concurrency': '2'})

        self.assertIn(
            'Invalid data type in ConcurrencyPolicy',
            wf_ex.state_info
        )
        self.assertEqual(states.ERROR, wf_ex.state)

    def test_with_items_concurrency_2(self):
        wf_with_concurrency_2 = """---
        version: "2.0"

        wf:
          type: direct

          input:
            - names: ["John", "Ivan", "Mistral", "Hello"]

          tasks:
            task1:
              action: std.async_noop
              with-items: name in <% $.names %>
              concurrency: 2
        """

        wf_service.create_workflows(wf_with_concurrency_2)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wf')

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_ex = wf_ex.task_executions[0]

            running_cnt = self._get_running_actions_count(task_ex)

            self._assert_capacity(0, task_ex)
            self.assertEqual(2, running_cnt)

        # 1st iteration complete.
        self.engine.on_action_complete(
            self._get_incomplete_action(task_ex).id,
            actions_base.Result("John")
        )

        # Wait till the delayed on_action_complete is processed.
        # 1 is always there to periodically check WF completion.
        self._await(lambda: len(db_api.get_delayed_calls()) == 1)

        with db_api.transaction():
            task_ex = db_api.get_task_execution(task_ex.id)

            running_cnt = self._get_running_actions_count(task_ex)

            self._assert_capacity(0, task_ex)
            self.assertEqual(2, running_cnt)

        # 2nd iteration complete.
        self.engine.on_action_complete(
            self._get_incomplete_action(task_ex).id,
            actions_base.Result("Ivan")
        )

        self._await(lambda: len(db_api.get_delayed_calls()) == 1)

        with db_api.transaction():
            task_ex = db_api.get_task_execution(task_ex.id)

            running_cnt = self._get_running_actions_count(task_ex)

            self._assert_capacity(0, task_ex)
            self.assertEqual(2, running_cnt)

        # 3rd iteration complete.
        self.engine.on_action_complete(
            self._get_incomplete_action(task_ex).id,
            actions_base.Result("Mistral")
        )

        self._await(lambda: len(db_api.get_delayed_calls()) == 1)

        with db_api.transaction():
            task_ex = db_api.get_task_execution(task_ex.id)

            self._assert_capacity(1, task_ex)

            incomplete_action = self._get_incomplete_action(task_ex)

        # 4th iteration complete.
        self.engine.on_action_complete(
            incomplete_action.id,
            actions_base.Result("Hello")
        )

        self._await(lambda: len(db_api.get_delayed_calls()) in (0, 1))

        task_ex = db_api.get_task_execution(task_ex.id)

        self._assert_capacity(2, task_ex)

        self.await_workflow_success(wf_ex.id)

        # Since we know that we can receive results in random order,
        # the check does not depend on the order of items.
        with db_api.transaction():
            task_ex = db_api.get_task_execution(task_ex.id)

            result = data_flow.get_task_execution_result(task_ex)

        self.assertIsInstance(result, list)
        self.assertIn('John', result)
        self.assertIn('Ivan', result)
        self.assertIn('Mistral', result)
        self.assertIn('Hello', result)

        self.assertEqual(states.SUCCESS, task_ex.state)

    def test_with_items_concurrency_2_fail(self):
        wf_with_concurrency_2_fail = """---
        version: "2.0"

        concurrency_test_fail:
          type: direct

          tasks:
            task1:
              with-items: i in [1, 2, 3, 4]
              action: std.fail
              concurrency: 2
              on-error: task2

            task2:
              action: std.echo output="With-items failed"
        """

        wf_service.create_workflows(wf_with_concurrency_2_fail)

        # Start workflow.
        wf_ex = self.engine.start_workflow('concurrency_test_fail')

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_exs = wf_ex.task_executions

            self.assertEqual(2, len(task_exs))

            task_2 = self._assert_single_item(task_exs, name='task2')

        with db_api.transaction():
            task_2 = db_api.get_task_execution(task_2.id)

            result = data_flow.get_task_execution_result(task_2)

        self.assertEqual('With-items failed', result)

    def test_with_items_concurrency_3(self):
        wf_with_concurrency_3 = """---
        version: "2.0"

        concurrency_test:
          type: direct

          input:
            - names: ["John", "Ivan", "Mistral"]

          tasks:
            task1:
              action: std.async_noop
              with-items: name in <% $.names %>
              concurrency: 3
        """

        wf_service.create_workflows(wf_with_concurrency_3)

        # Start workflow.
        wf_ex = self.engine.start_workflow('concurrency_test')

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_ex = wf_ex.task_executions[0]

            running_cnt = self._get_running_actions_count(task_ex)

            self._assert_capacity(0, task_ex)
            self.assertEqual(3, running_cnt)

        # 1st iteration complete.
        self.engine.on_action_complete(
            self._get_incomplete_action(task_ex).id,
            actions_base.Result("John")
        )

        # Wait till the delayed on_action_complete is processed.
        # 1 is always there to periodically check WF completion.
        self._await(lambda: len(db_api.get_delayed_calls()) == 1)

        with db_api.transaction():
            task_ex = db_api.get_task_execution(task_ex.id)

            self._assert_capacity(1, task_ex)

            incomplete_action = self._get_incomplete_action(task_ex)

        # 2nd iteration complete.
        self.engine.on_action_complete(
            incomplete_action.id,
            actions_base.Result("Ivan")
        )

        self._await(lambda: len(db_api.get_delayed_calls()) == 1)

        with db_api.transaction():
            task_ex = db_api.get_task_execution(task_ex.id)

            self._assert_capacity(2, task_ex)

            incomplete_action = self._get_incomplete_action(task_ex)

        # 3rd iteration complete.
        self.engine.on_action_complete(
            incomplete_action.id,
            actions_base.Result("Mistral")
        )

        self._await(lambda: len(db_api.get_delayed_calls()) in (0, 1))

        task_ex = db_api.get_task_execution(task_ex.id)

        self._assert_capacity(3, task_ex)

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            task_ex = db_api.get_task_execution(task_ex.id)

            self.assertEqual(states.SUCCESS, task_ex.state)

            # Since we know that we can receive results in random order,
            # the check does not depend on the order of items.
            result = data_flow.get_task_execution_result(task_ex)

        self.assertIsInstance(result, list)
        self.assertIn('John', result)
        self.assertIn('Ivan', result)
        self.assertIn('Mistral', result)

    def test_with_items_concurrency_gt_list_length(self):
        # TODO(rakhmerov): This test passes even with disabled 'concurrency'
        # support. Make sure it's valid.
        wf_definition = """---
        version: "2.0"

        concurrency_test:
          type: direct

          input:
            - names: ["John", "Ivan"]

          tasks:
            task1:
              with-items: name in <% $.names %>
              action: std.echo output=<% $.name %>
              concurrency: 3
        """

        wf_service.create_workflows(wf_definition)

        # Start workflow.
        wf_ex = self.engine.start_workflow('concurrency_test')

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            task_ex = self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )

            result = data_flow.get_task_execution_result(task_ex)

        self.assertIsInstance(result, list)
        self.assertIn('John', result)
        self.assertIn('Ivan', result)

    def test_with_items_retry_policy(self):
        wf_text = """---
        version: "2.0"

        with_items_retry:
          tasks:
            task1:
              with-items: i in [1, 2]
              action: std.fail
              retry:
                count: 1
                delay: 1
              on-error: task2

            task2:
              action: std.echo output="With-items failed"
        """

        wf_service.create_workflows(wf_text)

        # Start workflow.
        wf_ex = self.engine.start_workflow('with_items_retry')

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(2, len(task_execs))

            task1_ex = self._assert_single_item(task_execs, name='task1')

            task1_executions = task1_ex.executions

            self.assertEqual(
                1,
                task1_ex.runtime_context['retry_task_policy']['retry_no']
            )
            self.assertEqual(4, len(task1_executions))

            self._assert_multiple_items(task1_executions, 2, accepted=True)

    def test_with_items_concurrency_retry_policy(self):
        wf_text = """---
        version: "2.0"

        wf:
          tasks:
            task1:
              with-items: i in [1, 2]
              action: std.fail
              retry:
                count: 2
                delay: 1
              concurrency: 2
              on-error: task2

            task2:
              action: std.echo output="With-items failed"
        """

        wf_service.create_workflows(wf_text)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(2, len(task_execs))

            task1_ex = self._assert_single_item(task_execs, name='task1')

        with db_api.transaction():
            task1_ex = db_api.get_task_execution(task1_ex.id)

            task1_execs = task1_ex.executions

            self.assertEqual(6, len(task1_execs))

            self._assert_multiple_items(task1_execs, 2, accepted=True)

    def test_with_items_env(self):
        wf_text = """---
        version: "2.0"

        wf:
          tasks:
            task1:
              with-items: i in [1, 2, 3, 4]
              action: std.echo output="<% $.i %>.<% env().name %>"
        """

        wf_service.create_workflows(wf_text)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wf', env={'name': 'Mistral'})

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task1 = self._assert_single_item(
                wf_ex.task_executions,
                name='task1'
            )

            result = data_flow.get_task_execution_result(task1)

        self.assertEqual(
            [
                "1.Mistral",
                "2.Mistral",
                "3.Mistral",
                "4.Mistral"
            ],
            result
        )

        self.assertEqual(states.SUCCESS, task1.state)

    def test_with_items_two_tasks_second_starts_on_success(self):
        wb_text = """---
        version: "2.0"

        name: wb1

        workflows:
          with_items:
            type: direct

            tasks:
              task1:
                with-items: i in [1, 2]
                action: std.echo output=<% $.i %>
                on-success: task2

              task2:
                with-items: i in [3, 4]
                action: std.echo output=<% $.i %>
        """

        wb_service.create_workbook_v2(wb_text)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wb1.with_items')

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            task1_ex = self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )
            task2_ex = self._assert_single_item(
                task_execs,
                name='task2',
                state=states.SUCCESS
            )

        with db_api.transaction():
            task1_ex = db_api.get_task_execution(task1_ex.id)
            task2_ex = db_api.get_task_execution(task2_ex.id)

            result_task1 = data_flow.get_task_execution_result(task1_ex)
            result_task2 = data_flow.get_task_execution_result(task2_ex)

        # Since we know that we can receive results in random order,
        # the check does not depend on the order of items.
        self.assertIn(1, result_task1)
        self.assertIn(2, result_task1)
        self.assertIn(3, result_task2)
        self.assertIn(4, result_task2)

    def test_with_items_subflow_concurrency_gt_list_length(self):
        wb_text = """---
        version: "2.0"

        name: wb1

        workflows:
          main:
            type: direct

            input:
              - names

            tasks:
              task1:
                with-items: name in <% $.names %>
                workflow: subflow1 name=<% $.name %>
                concurrency: 3

          subflow1:
            type: direct

            input:
              - name

            output:
              result: <% task(task1).result %>

            tasks:
              task1:
                action: std.echo output=<% $.name %>
        """

        wb_service.create_workbook_v2(wb_text)

        # Start workflow.
        names = ["Peter", "Susan", "Edmund", "Lucy", "Aslan", "Caspian"]

        wf_ex = self.engine.start_workflow(
            'wb1.main',
            wf_input={'names': names}
        )

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            task_ex = self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )

        with db_api.transaction():
            task_ex = db_api.get_task_execution(task_ex.id)

            task_result = data_flow.get_task_execution_result(task_ex)

        result = [item['result'] for item in task_result]

        self.assertListEqual(sorted(result), sorted(names))

    @mock.patch.object(std_actions.HTTPAction, 'run')
    def test_with_items_and_adhoc_action(self, mock_http_action):
        mock_http_action.return_value = ''

        wb_text = """---
        version: "2.0"

        name: test

        actions:
          http:
            input:
              - url: http://www.example.com
              - method: GET
              - timeout: 10
            output: <% $.content %>
            base: std.http
            base-input:
              url: <% $.url %>
              method: <% $.method %>
              timeout: <% $.timeout %>

        workflows:
          with_items_default_bug:
            description: Re-create the with-items bug with default values
            type: direct

            tasks:
              get_pages:
                with-items: page in <% range(0, 1) %>
                action: test.http
                input:
                  url: http://www.example.com
                  method: GET
                on-success:
                  - well_done

              well_done:
                action: std.echo output="Well done"
        """

        wb_service.create_workbook_v2(wb_text)

        # Start workflow.
        wf_ex = self.engine.start_workflow('test.with_items_default_bug')

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            task1_ex = self._assert_single_item(task_execs, name='get_pages')
            task2_ex = self._assert_single_item(task_execs, name='well_done')

        self.assertEqual(2, len(task_execs))
        self.assertEqual(states.SUCCESS, task1_ex.state)
        self.assertEqual(states.SUCCESS, task2_ex.state)
        self.assertEqual(1, mock_http_action.call_count)
mistral-6.0.0/mistral/tests/unit/engine/test_profiler.py

# Copyright 2016 - Brocade Communications Systems, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import mock
from oslo_config import cfg
from oslo_utils import uuidutils
import osprofiler

from mistral import context
from mistral.services import workflows as wf_service
from mistral.tests.unit.engine import base
from mistral.workflow import states

# Use the set_default method to set the value; otherwise, in certain test
# cases the change in value is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan')
cfg.CONF.set_default('enabled', True, group='profiler')
cfg.CONF.set_default('hmac_keys', 'foobar', group='profiler')
cfg.CONF.set_default('profiler_log_name', 'profile_trace', group='profiler')


class EngineProfilerTest(base.EngineTestCase):
    def setUp(self):
        super(EngineProfilerTest, self).setUp()

        # Configure the osprofiler.
        self.mock_profiler_log_func = mock.Mock(return_value=None)
        osprofiler.notifier.set(self.mock_profiler_log_func)

        self.ctx_serializer = context.RpcContextSerializer()

    def test_profile_trace(self):
        wf_def = """
        version: '2.0'

        wf:
          type: direct

          tasks:
            task1:
              action: std.echo output="Peace!"
        """

        wf_service.create_workflows(wf_def)

        ctx = {
            'trace_info': {
                'hmac_key': cfg.CONF.profiler.hmac_keys,
                'base_id': uuidutils.generate_uuid(),
                'parent_id': uuidutils.generate_uuid()
            }
        }

        self.ctx_serializer.deserialize_context(ctx)

        wf_ex = self.engine_client.start_workflow('wf')

        self.assertIsNotNone(wf_ex)
        self.assertEqual(states.RUNNING, wf_ex['state'])

        self.await_workflow_success(wf_ex['id'])

        self.assertGreater(self.mock_profiler_log_func.call_count, 0)

    def test_no_profile_trace(self):
        wf_def = """
        version: '2.0'

        wf:
          type: direct

          tasks:
            task1:
              action: std.echo output="Peace!"
        """

        wf_service.create_workflows(wf_def)

        self.ctx_serializer.deserialize_context({})

        wf_ex = self.engine_client.start_workflow('wf')

        self.assertIsNotNone(wf_ex)
        self.assertEqual(states.RUNNING, wf_ex['state'])

        self.await_workflow_success(wf_ex['id'])

        self.assertEqual(self.mock_profiler_log_func.call_count, 0)
mistral-6.0.0/mistral/tests/unit/engine/test_subworkflows.py

# Copyright 2014 - Mirantis, Inc.
# Copyright 2015 - StackStorm, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import mock
from oslo_config import cfg

from mistral.actions import std_actions
from mistral import context as auth_context
from mistral.db.v2 import api as db_api
from mistral import exceptions as exc
from mistral.services import workbooks as wb_service
from mistral.tests.unit.engine import base
from mistral.workflow import states

# Use the set_default method to set the value; otherwise, in certain test
# cases the change in value is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan')


WB1 = """
---
version: '2.0'

name: wb1

workflows:
  wf1:
    type: reverse

    input:
      - param1
      - param2

    output:
      final_result: <% $.final_result %>

    tasks:
      task1:
        action: std.echo output=<% $.param1 %>
        publish:
          result1: <% task(task1).result %>

      task2:
        action: std.echo output="'<% $.param1 %> & <% $.param2 %>'"
        publish:
          final_result: <% task(task2).result %>
        requires: [task1]

  wf2:
    type: direct

    output:
      slogan: <% $.slogan %>

    tasks:
      task1:
        workflow: wf1 param1='Bonnie' param2='Clyde' task_name='task2'
        publish:
          slogan: "<% task(task1).result.final_result %> is a cool movie!"
""" WB2 = """ --- version: '2.0' name: wb2 workflows: wf1: type: direct tasks: task1: workflow: wf2 wf2: type: direct output: var1: <% $.does_not_exist %> tasks: task1: action: std.noop """ WB3 = """ --- version: '2.0' name: wb3 workflows: wf1: input: - wf_name output: sub_wf_out: <% $.sub_wf_out %> tasks: task1: workflow: <% $.wf_name %> publish: sub_wf_out: <% task(task1).result.sub_wf_out %> wf2: output: sub_wf_out: wf2_out tasks: task1: action: std.noop """ WB4 = """ --- version: '2.0' name: wb4 workflows: wf1: input: - wf_name - inp output: sub_wf_out: <% $.sub_wf_out %> tasks: task1: workflow: <% $.wf_name %> input: <% $.inp %> publish: sub_wf_out: <% task(task1).result.sub_wf_out %> wf2: input: - inp output: sub_wf_out: <% $.inp %> tasks: task1: action: std.noop """ WB5 = """ --- version: '2.0' name: wb5 workflows: wf1: input: - wf_name - inp output: sub_wf_out: '{{ _.sub_wf_out }}' tasks: task1: workflow: '{{ _.wf_name }}' input: '{{ _.inp }}' publish: sub_wf_out: '{{ task("task1").result.sub_wf_out }}' wf2: input: - inp output: sub_wf_out: '{{ _.inp }}' tasks: task1: action: std.noop """ WB6 = """ --- version: '2.0' name: wb6 workflows: wf1: tasks: task1: workflow: wf2 wf2: tasks: task1: workflow: wf3 wf3: tasks: task1: action: std.noop """ class SubworkflowsTest(base.EngineTestCase): def setUp(self): super(SubworkflowsTest, self).setUp() wb_service.create_workbook_v2(WB1) wb_service.create_workbook_v2(WB2) wb_service.create_workbook_v2(WB3) wb_service.create_workbook_v2(WB4) wb_service.create_workbook_v2(WB5) wb_service.create_workbook_v2(WB6) def test_subworkflow_success(self): wf2_ex = self.engine.start_workflow('wb1.wf2') project_id = auth_context.ctx().project_id # Execution of 'wf2'. 
        self.assertEqual(project_id, wf2_ex.project_id)

        self.assertIsNotNone(wf2_ex)
        self.assertDictEqual({}, wf2_ex.input)
        self.assertDictEqual({'namespace': ''}, wf2_ex.params)

        self._await(lambda: len(db_api.get_workflow_executions()) == 2, 0.5, 5)

        wf_execs = db_api.get_workflow_executions()

        self.assertEqual(2, len(wf_execs))

        # Execution of 'wf1'.
        wf1_ex = self._assert_single_item(wf_execs, name='wb1.wf1')
        wf2_ex = self._assert_single_item(wf_execs, name='wb1.wf2')

        self.assertEqual(project_id, wf1_ex.project_id)
        self.assertIsNotNone(wf1_ex.task_execution_id)
        self.assertDictContainsSubset(
            {
                'task_name': 'task2',
                'task_execution_id': wf1_ex.task_execution_id
            },
            wf1_ex.params
        )
        self.assertDictEqual(
            {
                'param1': 'Bonnie',
                'param2': 'Clyde'
            },
            wf1_ex.input
        )

        # Wait till workflow 'wf1' is completed.
        self.await_workflow_success(wf1_ex.id)

        with db_api.transaction():
            wf1_ex = db_api.get_workflow_execution(wf1_ex.id)

            wf1_output = wf1_ex.output

        self.assertDictEqual(
            {'final_result': "'Bonnie & Clyde'"},
            wf1_output
        )

        # Wait till workflow 'wf2' is completed.
        self.await_workflow_success(wf2_ex.id, timeout=4)

        with db_api.transaction():
            wf2_ex = db_api.get_workflow_execution(wf2_ex.id)

            wf2_output = wf2_ex.output

        self.assertDictEqual(
            {'slogan': "'Bonnie & Clyde' is a cool movie!"},
            wf2_output
        )

        # Check project_id in tasks.
        wf1_task_execs = db_api.get_task_executions(
            workflow_execution_id=wf1_ex.id
        )
        wf2_task_execs = db_api.get_task_executions(
            workflow_execution_id=wf2_ex.id
        )

        wf2_task1_ex = self._assert_single_item(wf1_task_execs, name='task1')
        wf1_task1_ex = self._assert_single_item(wf2_task_execs, name='task1')
        wf1_task2_ex = self._assert_single_item(wf1_task_execs, name='task2')

        self.assertEqual(project_id, wf2_task1_ex.project_id)
        self.assertEqual(project_id, wf1_task1_ex.project_id)
        self.assertEqual(project_id, wf1_task2_ex.project_id)

    @mock.patch.object(std_actions.EchoAction, 'run',
                       mock.MagicMock(side_effect=exc.ActionException))
    def test_subworkflow_error(self):
        self.engine.start_workflow('wb1.wf2')

        self._await(lambda: len(db_api.get_workflow_executions()) == 2, 0.5, 5)

        wf_execs = db_api.get_workflow_executions()

        self.assertEqual(2, len(wf_execs))

        wf1_ex = self._assert_single_item(wf_execs, name='wb1.wf1')
        wf2_ex = self._assert_single_item(wf_execs, name='wb1.wf2')

        # Wait till workflow 'wf1' is completed.
        self.await_workflow_error(wf1_ex.id)

        # Wait till workflow 'wf2' is completed, its state must be ERROR.
        self.await_workflow_error(wf2_ex.id)

    def test_subworkflow_yaql_error(self):
        wf_ex = self.engine.start_workflow('wb2.wf1')

        self.await_workflow_error(wf_ex.id)

        wf_execs = db_api.get_workflow_executions()

        self.assertEqual(2, len(wf_execs))

        wf2_ex = self._assert_single_item(wf_execs, name='wb2.wf2')

        self.assertEqual(states.ERROR, wf2_ex.state)
        self.assertIn('Can not evaluate YAQL expression', wf2_ex.state_info)

        # Ensure error message is bubbled up to the main workflow.
        wf1_ex = self._assert_single_item(wf_execs, name='wb2.wf1')

        self.assertEqual(states.ERROR, wf1_ex.state)
        self.assertIn('Can not evaluate YAQL expression', wf1_ex.state_info)

    def test_subworkflow_environment_inheritance(self):
        env = {'key1': 'abc'}

        wf2_ex = self.engine.start_workflow('wb1.wf2', env=env)

        # Execution of 'wf2'.
        self.assertIsNotNone(wf2_ex)
        self.assertDictEqual({}, wf2_ex.input)
        self.assertDictEqual(
            {'env': env, 'namespace': ''},
            wf2_ex.params
        )

        self._await(lambda: len(db_api.get_workflow_executions()) == 2, 0.5, 5)

        wf_execs = db_api.get_workflow_executions()

        self.assertEqual(2, len(wf_execs))

        # Execution of 'wf1'.
        wf1_ex = self._assert_single_item(wf_execs, name='wb1.wf1')
        wf2_ex = self._assert_single_item(wf_execs, name='wb1.wf2')

        expected_start_params = {
            'task_name': 'task2',
            'task_execution_id': wf1_ex.task_execution_id,
            'env': env
        }

        self.assertIsNotNone(wf1_ex.task_execution_id)
        self.assertDictContainsSubset(expected_start_params, wf1_ex.params)

        # Wait till workflow 'wf1' is completed.
        self.await_workflow_success(wf1_ex.id)

        # Wait till workflow 'wf2' is completed.
        self.await_workflow_success(wf2_ex.id)

    def test_dynamic_subworkflow_wf2(self):
        ex = self.engine.start_workflow('wb3.wf1', wf_input={'wf_name': 'wf2'})

        self.await_workflow_success(ex.id)

        with db_api.transaction():
            ex = db_api.get_workflow_execution(ex.id)

            self.assertEqual({'sub_wf_out': 'wf2_out'}, ex.output)

    def test_dynamic_subworkflow_call_failure(self):
        ex = self.engine.start_workflow(
            'wb3.wf1',
            wf_input={'wf_name': 'not_existing_wf'}
        )

        self.await_workflow_error(ex.id)

        with db_api.transaction():
            ex = db_api.get_workflow_execution(ex.id)

            self.assertIn('not_existing_wf', ex.state_info)

    def test_dynamic_subworkflow_with_generic_input(self):
        self._test_dynamic_workflow_with_dict_param('wb4.wf1')

    def test_dynamic_subworkflow_with_jinja(self):
        self._test_dynamic_workflow_with_dict_param('wb5.wf1')

    def test_string_workflow_input_failure(self):
        ex = self.engine.start_workflow(
            'wb4.wf1',
            wf_input={'wf_name': 'wf2', 'inp': 'invalid_string_input'}
        )

        self.await_workflow_error(ex.id)

        with db_api.transaction():
            ex = db_api.get_workflow_execution(ex.id)

            self.assertIn('invalid_string_input', ex.state_info)

    def _test_dynamic_workflow_with_dict_param(self, wf_identifier):
        ex = self.engine.start_workflow(
            wf_identifier,
            wf_input={'wf_name': 'wf2', 'inp': {'inp': 'abc'}}
        )

        self.await_workflow_success(ex.id)

        with db_api.transaction():
            ex = db_api.get_workflow_execution(ex.id)

            self.assertEqual({'sub_wf_out': 'abc'}, ex.output)

    def test_subworkflow_root_execution_id(self):
        self.engine.start_workflow('wb6.wf1')

        self._await(lambda: len(db_api.get_workflow_executions()) == 3, 0.5, 5)

        wf_execs = db_api.get_workflow_executions()

        wf1_ex = self._assert_single_item(wf_execs, name='wb6.wf1')
        wf2_ex = self._assert_single_item(wf_execs, name='wb6.wf2')
        wf3_ex = self._assert_single_item(wf_execs, name='wb6.wf3')

        self.assertEqual(3, len(wf_execs))

        # Wait till workflow 'wf1' is completed (all of its sub-workflows
        # will be completed as well).
        self.await_workflow_success(wf1_ex.id)

        with db_api.transaction():
            wf1_ex = db_api.get_workflow_execution(wf1_ex.id)
            wf2_ex = db_api.get_workflow_execution(wf2_ex.id)
            wf3_ex = db_api.get_workflow_execution(wf3_ex.id)

            self.assertIsNone(wf1_ex.root_execution_id)
            self.assertEqual(wf2_ex.root_execution_id, wf1_ex.id)
            self.assertEqual(wf3_ex.root_execution_id, wf1_ex.id)

mistral-6.0.0/mistral/tests/unit/engine/test_subworkflows_pause_resume.py

# Copyright 2015 - StackStorm, Inc.
# Copyright 2016 - Brocade Communications Systems, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from mistral.db.v2 import api as db_api from mistral.services import workbooks as wb_service from mistral.tests.unit.engine import base from mistral.workflow import states from mistral_lib import actions as ml_actions class SubworkflowPauseResumeTest(base.EngineTestCase): def test_pause_resume_cascade_down_to_subworkflow(self): workbook = """ version: '2.0' name: wb workflows: wf1: tasks: task1: workflow: wf2 on-success: - task3 task2: workflow: wf3 on-success: - task3 task3: join: all action: std.noop wf2: tasks: task1: action: std.async_noop on-success: - task2 task2: action: std.noop wf3: tasks: task1: action: std.async_noop on-success: - task2 task2: action: std.noop """ wb_service.create_workbook_v2(workbook) # Start workflow execution. wf_1_ex = self.engine.start_workflow('wb.wf1') self.await_workflow_state(wf_1_ex.id, states.RUNNING) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() # Get objects for the parent workflow execution. wf_1_ex = self._assert_single_item(wf_execs, name='wb.wf1') wf_1_task_execs = wf_1_ex.task_executions wf_1_task_1_ex = self._assert_single_item( wf_1_ex.task_executions, name='task1' ) wf_1_task_1_action_exs = wf_1_task_1_ex.executions wf_1_task_2_ex = self._assert_single_item( wf_1_ex.task_executions, name='task2' ) wf_1_task_2_action_exs = wf_1_task_2_ex.executions # Get objects for the subworkflow executions. 
wf_2_ex = self._assert_single_item(wf_execs, name='wb.wf2') wf_2_task_execs = wf_2_ex.task_executions wf_2_task_1_ex = self._assert_single_item( wf_2_ex.task_executions, name='task1' ) wf_2_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_task_1_ex.id ) wf_3_ex = self._assert_single_item(wf_execs, name='wb.wf3') wf_3_task_execs = wf_3_ex.task_executions wf_3_task_1_ex = self._assert_single_item( wf_3_ex.task_executions, name='task1' ) wf_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_3_task_1_ex.id ) self.assertEqual(states.RUNNING, wf_1_ex.state) self.assertEqual(2, len(wf_1_task_execs)) self.assertEqual(states.RUNNING, wf_1_task_1_ex.state) self.assertEqual(states.RUNNING, wf_1_task_2_ex.state) self.assertEqual(1, len(wf_1_task_1_action_exs)) self.assertEqual(states.RUNNING, wf_1_task_1_action_exs[0].state) self.assertEqual(wf_1_task_1_action_exs[0].id, wf_2_ex.id) self.assertEqual(1, len(wf_1_task_2_action_exs)) self.assertEqual(states.RUNNING, wf_1_task_2_action_exs[0].state) self.assertEqual(wf_1_task_2_action_exs[0].id, wf_3_ex.id) self.assertEqual(states.RUNNING, wf_2_ex.state) self.assertEqual(1, len(wf_2_task_execs)) self.assertEqual(states.RUNNING, wf_2_task_1_ex.state) self.assertEqual(1, len(wf_2_task_1_action_exs)) self.assertEqual(states.RUNNING, wf_2_task_1_action_exs[0].state) self.assertEqual(states.RUNNING, wf_3_ex.state) self.assertEqual(1, len(wf_3_task_execs)) self.assertEqual(states.RUNNING, wf_3_task_1_ex.state) self.assertEqual(1, len(wf_3_task_1_action_exs)) self.assertEqual(states.RUNNING, wf_3_task_1_action_exs[0].state) # Pause the main workflow. self.engine.pause_workflow(wf_1_ex.id) self.await_workflow_paused(wf_1_ex.id) self.await_workflow_paused(wf_2_ex.id) self.await_workflow_paused(wf_3_ex.id) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() # Get objects for the parent workflow execution. 
wf_1_ex = self._assert_single_item(wf_execs, name='wb.wf1') wf_1_task_execs = wf_1_ex.task_executions wf_1_task_1_ex = self._assert_single_item( wf_1_ex.task_executions, name='task1' ) wf_1_task_1_action_exs = wf_1_task_1_ex.executions wf_1_task_2_ex = self._assert_single_item( wf_1_ex.task_executions, name='task2' ) wf_1_task_2_action_exs = wf_1_task_2_ex.executions # Get objects for the subworkflow executions. wf_2_ex = self._assert_single_item(wf_execs, name='wb.wf2') wf_2_task_execs = wf_2_ex.task_executions wf_2_task_1_ex = self._assert_single_item( wf_2_ex.task_executions, name='task1' ) wf_2_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_task_1_ex.id ) wf_3_ex = self._assert_single_item(wf_execs, name='wb.wf3') wf_3_task_execs = wf_3_ex.task_executions wf_3_task_1_ex = self._assert_single_item( wf_3_ex.task_executions, name='task1' ) wf_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_3_task_1_ex.id ) self.assertEqual(states.PAUSED, wf_2_ex.state) self.assertEqual(states.RUNNING, wf_2_task_1_ex.state) self.assertEqual(states.RUNNING, wf_2_task_1_action_exs[0].state) self.assertEqual(states.PAUSED, wf_3_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_action_exs[0].state) self.assertEqual(states.PAUSED, wf_1_task_1_action_exs[0].state) self.assertEqual(states.PAUSED, wf_1_task_1_ex.state) self.assertEqual(states.PAUSED, wf_1_task_2_action_exs[0].state) self.assertEqual(states.PAUSED, wf_1_task_2_ex.state) self.assertEqual(states.PAUSED, wf_1_ex.state) # Resume the main workflow. self.engine.resume_workflow(wf_1_ex.id) self.await_workflow_running(wf_1_ex.id) self.await_workflow_running(wf_2_ex.id) self.await_workflow_running(wf_3_ex.id) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() # Get objects for the parent workflow execution. 
wf_1_ex = self._assert_single_item(wf_execs, name='wb.wf1') wf_1_task_execs = wf_1_ex.task_executions wf_1_task_1_ex = self._assert_single_item( wf_1_ex.task_executions, name='task1' ) wf_1_task_1_action_exs = wf_1_task_1_ex.executions wf_1_task_2_ex = self._assert_single_item( wf_1_ex.task_executions, name='task2' ) wf_1_task_2_action_exs = wf_1_task_2_ex.executions # Get objects for the subworkflow executions. wf_2_ex = self._assert_single_item(wf_execs, name='wb.wf2') wf_2_task_execs = wf_2_ex.task_executions wf_2_task_1_ex = self._assert_single_item( wf_2_ex.task_executions, name='task1' ) wf_2_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_task_1_ex.id ) wf_3_ex = self._assert_single_item(wf_execs, name='wb.wf3') wf_3_task_execs = wf_3_ex.task_executions wf_3_task_1_ex = self._assert_single_item( wf_3_ex.task_executions, name='task1' ) wf_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_3_task_1_ex.id ) self.assertEqual(states.RUNNING, wf_2_ex.state) self.assertEqual(states.RUNNING, wf_2_task_1_ex.state) self.assertEqual(states.RUNNING, wf_2_task_1_action_exs[0].state) self.assertEqual(states.RUNNING, wf_3_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_action_exs[0].state) self.assertEqual(states.RUNNING, wf_1_task_1_action_exs[0].state) self.assertEqual(states.RUNNING, wf_1_task_1_ex.state) self.assertEqual(states.RUNNING, wf_1_task_2_action_exs[0].state) self.assertEqual(states.RUNNING, wf_1_task_2_ex.state) self.assertEqual(states.RUNNING, wf_1_ex.state) # Complete action executions of the subworkflows. 
self.engine.on_action_complete( wf_2_task_1_action_exs[0].id, ml_actions.Result(data={'result': 'foobar'}) ) self.engine.on_action_complete( wf_3_task_1_action_exs[0].id, ml_actions.Result(data={'result': 'foobar'}) ) self.await_workflow_success(wf_2_ex.id) self.await_workflow_success(wf_3_ex.id) self.await_workflow_success(wf_1_ex.id) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() # Get objects for the parent workflow execution. wf_1_ex = self._assert_single_item(wf_execs, name='wb.wf1') wf_1_task_execs = wf_1_ex.task_executions wf_1_task_1_ex = self._assert_single_item( wf_1_ex.task_executions, name='task1' ) wf_1_task_1_action_exs = wf_1_task_1_ex.executions wf_1_task_2_ex = self._assert_single_item( wf_1_ex.task_executions, name='task2' ) wf_1_task_2_action_exs = wf_1_task_2_ex.executions wf_1_task_3_ex = self._assert_single_item( wf_1_ex.task_executions, name='task3' ) # Get objects for the subworkflow executions. wf_2_ex = self._assert_single_item(wf_execs, name='wb.wf2') wf_2_task_execs = wf_2_ex.task_executions wf_2_task_1_ex = self._assert_single_item( wf_2_ex.task_executions, name='task1' ) wf_2_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_task_1_ex.id ) wf_2_task_2_ex = self._assert_single_item( wf_2_ex.task_executions, name='task2' ) wf_3_ex = self._assert_single_item(wf_execs, name='wb.wf3') wf_3_task_execs = wf_3_ex.task_executions wf_3_task_1_ex = self._assert_single_item( wf_3_ex.task_executions, name='task1' ) wf_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_3_task_1_ex.id ) wf_3_task_2_ex = self._assert_single_item( wf_3_ex.task_executions, name='task2' ) self.assertEqual(states.SUCCESS, wf_1_ex.state) self.assertEqual(3, len(wf_1_task_execs)) self.assertEqual(states.SUCCESS, wf_1_task_1_ex.state) self.assertEqual(states.SUCCESS, wf_1_task_2_ex.state) self.assertEqual(states.SUCCESS, wf_1_task_3_ex.state) self.assertEqual(states.SUCCESS, wf_1_task_1_action_exs[0].state) 
self.assertEqual(states.SUCCESS, wf_1_task_2_action_exs[0].state) self.assertEqual(states.SUCCESS, wf_2_ex.state) self.assertEqual(2, len(wf_2_task_execs)) self.assertEqual(states.SUCCESS, wf_2_task_1_ex.state) self.assertEqual(states.SUCCESS, wf_2_task_2_ex.state) self.assertEqual(states.SUCCESS, wf_3_ex.state) self.assertEqual(2, len(wf_3_task_execs)) self.assertEqual(states.SUCCESS, wf_3_task_1_ex.state) self.assertEqual(states.SUCCESS, wf_3_task_2_ex.state) def test_pause_resume_cascade_up_from_subworkflow(self): workbook = """ version: '2.0' name: wb workflows: wf1: tasks: task1: workflow: wf2 on-success: - task3 task2: workflow: wf3 on-success: - task3 task3: join: all action: std.noop wf2: tasks: task1: action: std.async_noop on-success: - task2 task2: action: std.noop wf3: tasks: task1: action: std.async_noop on-success: - task2 task2: action: std.noop """ wb_service.create_workbook_v2(workbook) # Start workflow execution. wf_1_ex = self.engine.start_workflow('wb.wf1') self.await_workflow_state(wf_1_ex.id, states.RUNNING) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() # Get objects for the parent workflow execution. wf_1_ex = self._assert_single_item(wf_execs, name='wb.wf1') wf_1_task_execs = wf_1_ex.task_executions wf_1_task_1_ex = self._assert_single_item( wf_1_ex.task_executions, name='task1' ) wf_1_task_1_action_exs = wf_1_task_1_ex.executions wf_1_task_2_ex = self._assert_single_item( wf_1_ex.task_executions, name='task2' ) wf_1_task_2_action_exs = wf_1_task_2_ex.executions # Get objects for the subworkflow executions. 
wf_2_ex = self._assert_single_item(wf_execs, name='wb.wf2') wf_2_task_execs = wf_2_ex.task_executions wf_2_task_1_ex = self._assert_single_item( wf_2_ex.task_executions, name='task1' ) wf_2_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_task_1_ex.id ) wf_3_ex = self._assert_single_item(wf_execs, name='wb.wf3') wf_3_task_execs = wf_3_ex.task_executions wf_3_task_1_ex = self._assert_single_item( wf_3_ex.task_executions, name='task1' ) wf_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_3_task_1_ex.id ) self.assertEqual(states.RUNNING, wf_1_ex.state) self.assertEqual(2, len(wf_1_task_execs)) self.assertEqual(states.RUNNING, wf_1_task_1_ex.state) self.assertEqual(states.RUNNING, wf_1_task_2_ex.state) self.assertEqual(1, len(wf_1_task_1_action_exs)) self.assertEqual(states.RUNNING, wf_1_task_1_action_exs[0].state) self.assertEqual(wf_1_task_1_action_exs[0].id, wf_2_ex.id) self.assertEqual(1, len(wf_1_task_2_action_exs)) self.assertEqual(states.RUNNING, wf_1_task_2_action_exs[0].state) self.assertEqual(wf_1_task_2_action_exs[0].id, wf_3_ex.id) self.assertEqual(states.RUNNING, wf_2_ex.state) self.assertEqual(1, len(wf_2_task_execs)) self.assertEqual(states.RUNNING, wf_2_task_1_ex.state) self.assertEqual(1, len(wf_2_task_1_action_exs)) self.assertEqual(states.RUNNING, wf_2_task_1_action_exs[0].state) self.assertEqual(states.RUNNING, wf_3_ex.state) self.assertEqual(1, len(wf_3_task_execs)) self.assertEqual(states.RUNNING, wf_3_task_1_ex.state) self.assertEqual(1, len(wf_3_task_1_action_exs)) self.assertEqual(states.RUNNING, wf_3_task_1_action_exs[0].state) # Pause the subworkflow. self.engine.pause_workflow(wf_2_ex.id) self.await_workflow_paused(wf_2_ex.id) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() # Get objects for the parent workflow execution. 
wf_1_ex = self._assert_single_item(wf_execs, name='wb.wf1') wf_1_task_execs = wf_1_ex.task_executions wf_1_task_1_ex = self._assert_single_item( wf_1_ex.task_executions, name='task1' ) wf_1_task_1_action_exs = wf_1_task_1_ex.executions wf_1_task_2_ex = self._assert_single_item( wf_1_ex.task_executions, name='task2' ) wf_1_task_2_action_exs = wf_1_task_2_ex.executions # Get objects for the subworkflow executions. wf_2_ex = self._assert_single_item(wf_execs, name='wb.wf2') wf_2_task_execs = wf_2_ex.task_executions wf_2_task_1_ex = self._assert_single_item( wf_2_ex.task_executions, name='task1' ) wf_2_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_task_1_ex.id ) wf_3_ex = self._assert_single_item(wf_execs, name='wb.wf3') wf_3_task_execs = wf_3_ex.task_executions wf_3_task_1_ex = self._assert_single_item( wf_3_ex.task_executions, name='task1' ) wf_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_3_task_1_ex.id ) self.assertEqual(states.PAUSED, wf_2_ex.state) self.assertEqual(states.RUNNING, wf_2_task_1_ex.state) self.assertEqual(states.RUNNING, wf_2_task_1_action_exs[0].state) self.assertEqual(states.PAUSED, wf_3_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_action_exs[0].state) self.assertEqual(states.PAUSED, wf_1_task_1_action_exs[0].state) self.assertEqual(states.PAUSED, wf_1_task_1_ex.state) self.assertEqual(states.PAUSED, wf_1_task_2_action_exs[0].state) self.assertEqual(states.PAUSED, wf_1_task_2_ex.state) self.assertEqual(states.PAUSED, wf_1_ex.state) # Resume the 1st subworkflow. self.engine.resume_workflow(wf_2_ex.id) self.await_workflow_running(wf_2_ex.id) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() # Get objects for the parent workflow execution. 
wf_1_ex = self._assert_single_item(wf_execs, name='wb.wf1') wf_1_task_execs = wf_1_ex.task_executions wf_1_task_1_ex = self._assert_single_item( wf_1_ex.task_executions, name='task1' ) wf_1_task_1_action_exs = wf_1_task_1_ex.executions wf_1_task_2_ex = self._assert_single_item( wf_1_ex.task_executions, name='task2' ) wf_1_task_2_action_exs = wf_1_task_2_ex.executions # Get objects for the subworkflow executions. wf_2_ex = self._assert_single_item(wf_execs, name='wb.wf2') wf_2_task_execs = wf_2_ex.task_executions wf_2_task_1_ex = self._assert_single_item( wf_2_ex.task_executions, name='task1' ) wf_2_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_task_1_ex.id ) wf_3_ex = self._assert_single_item(wf_execs, name='wb.wf3') wf_3_task_execs = wf_3_ex.task_executions wf_3_task_1_ex = self._assert_single_item( wf_3_ex.task_executions, name='task1' ) wf_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_3_task_1_ex.id ) self.assertEqual(states.RUNNING, wf_2_ex.state) self.assertEqual(states.RUNNING, wf_2_task_1_ex.state) self.assertEqual(states.RUNNING, wf_2_task_1_action_exs[0].state) self.assertEqual(states.PAUSED, wf_3_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_action_exs[0].state) self.assertEqual(states.RUNNING, wf_1_task_1_action_exs[0].state) self.assertEqual(states.RUNNING, wf_1_task_1_ex.state) self.assertEqual(states.PAUSED, wf_1_task_2_action_exs[0].state) self.assertEqual(states.PAUSED, wf_1_task_2_ex.state) self.assertEqual(states.PAUSED, wf_1_ex.state) # Complete action execution of 1st subworkflow. self.engine.on_action_complete( wf_2_task_1_action_exs[0].id, ml_actions.Result(data={'result': 'foobar'}) ) self.await_workflow_success(wf_2_ex.id) self.await_task_success(wf_1_task_1_ex.id) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() # Get objects for the parent workflow execution. 
wf_1_ex = self._assert_single_item(wf_execs, name='wb.wf1') wf_1_task_execs = wf_1_ex.task_executions wf_1_task_1_ex = self._assert_single_item( wf_1_ex.task_executions, name='task1' ) wf_1_task_1_action_exs = wf_1_task_1_ex.executions wf_1_task_2_ex = self._assert_single_item( wf_1_ex.task_executions, name='task2' ) wf_1_task_2_action_exs = wf_1_task_2_ex.executions # Get objects for the subworkflow executions. wf_2_ex = self._assert_single_item(wf_execs, name='wb.wf2') wf_2_task_execs = wf_2_ex.task_executions wf_2_task_1_ex = self._assert_single_item( wf_2_ex.task_executions, name='task1' ) wf_2_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_task_1_ex.id ) wf_3_ex = self._assert_single_item(wf_execs, name='wb.wf3') wf_3_task_execs = wf_3_ex.task_executions wf_3_task_1_ex = self._assert_single_item( wf_3_ex.task_executions, name='task1' ) wf_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_3_task_1_ex.id ) self.assertEqual(states.SUCCESS, wf_2_ex.state) self.assertEqual(states.SUCCESS, wf_2_task_1_ex.state) self.assertEqual(states.SUCCESS, wf_2_task_1_action_exs[0].state) self.assertEqual(states.PAUSED, wf_3_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_action_exs[0].state) self.assertEqual(states.SUCCESS, wf_1_task_1_action_exs[0].state) self.assertEqual(states.SUCCESS, wf_1_task_1_ex.state) self.assertEqual(states.PAUSED, wf_1_task_2_action_exs[0].state) self.assertEqual(states.PAUSED, wf_1_task_2_ex.state) self.assertEqual(states.PAUSED, wf_1_ex.state) # Resume the 2nd subworkflow. self.engine.resume_workflow(wf_3_ex.id) self.await_workflow_running(wf_3_ex.id) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() # Get objects for the parent workflow execution. 
wf_1_ex = self._assert_single_item(wf_execs, name='wb.wf1') wf_1_task_execs = wf_1_ex.task_executions wf_1_task_1_ex = self._assert_single_item( wf_1_ex.task_executions, name='task1' ) wf_1_task_1_action_exs = wf_1_task_1_ex.executions wf_1_task_2_ex = self._assert_single_item( wf_1_ex.task_executions, name='task2' ) wf_1_task_2_action_exs = wf_1_task_2_ex.executions # Get objects for the subworkflow executions. wf_2_ex = self._assert_single_item(wf_execs, name='wb.wf2') wf_2_task_execs = wf_2_ex.task_executions wf_2_task_1_ex = self._assert_single_item( wf_2_ex.task_executions, name='task1' ) wf_2_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_task_1_ex.id ) wf_3_ex = self._assert_single_item(wf_execs, name='wb.wf3') wf_3_task_execs = wf_3_ex.task_executions wf_3_task_1_ex = self._assert_single_item( wf_3_ex.task_executions, name='task1' ) wf_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_3_task_1_ex.id ) self.assertEqual(states.SUCCESS, wf_2_ex.state) self.assertEqual(states.SUCCESS, wf_2_task_1_ex.state) self.assertEqual(states.SUCCESS, wf_2_task_1_action_exs[0].state) self.assertEqual(states.RUNNING, wf_3_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_action_exs[0].state) self.assertEqual(states.SUCCESS, wf_1_task_1_action_exs[0].state) self.assertEqual(states.SUCCESS, wf_1_task_1_ex.state) self.assertEqual(states.RUNNING, wf_1_task_2_action_exs[0].state) self.assertEqual(states.RUNNING, wf_1_task_2_ex.state) self.assertEqual(states.RUNNING, wf_1_ex.state) # Complete action execution of 2nd subworkflow. self.engine.on_action_complete( wf_3_task_1_action_exs[0].id, ml_actions.Result(data={'result': 'foobar'}) ) self.await_workflow_success(wf_3_ex.id) self.await_workflow_success(wf_1_ex.id) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() # Get objects for the parent workflow execution. 
wf_1_ex = self._assert_single_item(wf_execs, name='wb.wf1') wf_1_task_execs = wf_1_ex.task_executions wf_1_task_1_ex = self._assert_single_item( wf_1_ex.task_executions, name='task1' ) wf_1_task_1_action_exs = wf_1_task_1_ex.executions wf_1_task_2_ex = self._assert_single_item( wf_1_ex.task_executions, name='task2' ) wf_1_task_2_action_exs = wf_1_task_2_ex.executions wf_1_task_3_ex = self._assert_single_item( wf_1_ex.task_executions, name='task3' ) # Get objects for the subworkflow executions. wf_2_ex = self._assert_single_item(wf_execs, name='wb.wf2') wf_2_task_execs = wf_2_ex.task_executions wf_2_task_1_ex = self._assert_single_item( wf_2_ex.task_executions, name='task1' ) wf_2_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_task_1_ex.id ) wf_2_task_2_ex = self._assert_single_item( wf_2_ex.task_executions, name='task2' ) wf_3_ex = self._assert_single_item(wf_execs, name='wb.wf3') wf_3_task_execs = wf_3_ex.task_executions wf_3_task_1_ex = self._assert_single_item( wf_3_ex.task_executions, name='task1' ) wf_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_3_task_1_ex.id ) wf_3_task_2_ex = self._assert_single_item( wf_3_ex.task_executions, name='task2' ) self.assertEqual(states.SUCCESS, wf_1_ex.state) self.assertEqual(3, len(wf_1_task_execs)) self.assertEqual(states.SUCCESS, wf_1_task_1_ex.state) self.assertEqual(states.SUCCESS, wf_1_task_2_ex.state) self.assertEqual(states.SUCCESS, wf_1_task_3_ex.state) self.assertEqual(states.SUCCESS, wf_1_task_1_action_exs[0].state) self.assertEqual(states.SUCCESS, wf_1_task_2_action_exs[0].state) self.assertEqual(states.SUCCESS, wf_2_ex.state) self.assertEqual(2, len(wf_2_task_execs)) self.assertEqual(states.SUCCESS, wf_2_task_1_ex.state) self.assertEqual(states.SUCCESS, wf_2_task_2_ex.state) self.assertEqual(states.SUCCESS, wf_3_ex.state) self.assertEqual(2, len(wf_3_task_execs)) self.assertEqual(states.SUCCESS, wf_3_task_1_ex.state) self.assertEqual(states.SUCCESS, 
wf_3_task_2_ex.state) def test_pause_resume_cascade_down_to_with_items_subworkflows(self): workbook = """ version: '2.0' name: wb workflows: wf1: tasks: task1: with-items: i in <% range(3) %> workflow: wf2 on-success: - task3 task2: workflow: wf3 on-success: - task3 task3: join: all action: std.noop wf2: tasks: task1: action: std.async_noop on-success: - task2 task2: action: std.noop wf3: tasks: task1: action: std.async_noop on-success: - task2 task2: action: std.noop """ wb_service.create_workbook_v2(workbook) # Start workflow execution. wf_1_ex = self.engine.start_workflow('wb.wf1') self.await_workflow_state(wf_1_ex.id, states.RUNNING) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() # Get objects for the parent workflow execution. wf_1_ex = self._assert_single_item(wf_execs, name='wb.wf1') wf_1_task_execs = wf_1_ex.task_executions wf_1_task_1_ex = self._assert_single_item( wf_1_ex.task_executions, name='task1' ) wf_1_task_1_action_exs = sorted( wf_1_task_1_ex.executions, key=lambda x: x['runtime_context']['index'] ) wf_1_task_2_ex = self._assert_single_item( wf_1_ex.task_executions, name='task2' ) wf_1_task_2_action_exs = wf_1_task_2_ex.executions # Get objects for the with-items subworkflow executions. 
wf_2_ex_1 = db_api.get_workflow_execution( wf_1_task_1_action_exs[0].id ) wf_2_ex_1_task_execs = wf_2_ex_1.task_executions wf_2_ex_1_task_1_ex = self._assert_single_item( wf_2_ex_1.task_executions, name='task1' ) wf_2_ex_1_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_1_task_1_ex.id ) wf_2_ex_2 = db_api.get_workflow_execution( wf_1_task_1_action_exs[1].id ) wf_2_ex_2_task_execs = wf_2_ex_2.task_executions wf_2_ex_2_task_1_ex = self._assert_single_item( wf_2_ex_2.task_executions, name='task1' ) wf_2_ex_2_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_2_task_1_ex.id ) wf_2_ex_3 = db_api.get_workflow_execution( wf_1_task_1_action_exs[2].id ) wf_2_ex_3_task_execs = wf_2_ex_3.task_executions wf_2_ex_3_task_1_ex = self._assert_single_item( wf_2_ex_3.task_executions, name='task1' ) wf_2_ex_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_3_task_1_ex.id ) # Get objects for the wf3 subworkflow execution. wf_3_ex = self._assert_single_item(wf_execs, name='wb.wf3') wf_3_task_execs = wf_3_ex.task_executions wf_3_task_1_ex = self._assert_single_item( wf_3_ex.task_executions, name='task1' ) wf_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_3_task_1_ex.id ) # Check state of parent workflow execution. self.assertEqual(states.RUNNING, wf_1_ex.state) self.assertEqual(2, len(wf_1_task_execs)) self.assertEqual(states.RUNNING, wf_1_task_1_ex.state) self.assertEqual(states.RUNNING, wf_1_task_2_ex.state) self.assertEqual(3, len(wf_1_task_1_action_exs)) # Check state of wf2 (1) subworkflow execution. 
self.assertEqual(states.RUNNING, wf_1_task_1_action_exs[0].state) self.assertEqual(wf_1_task_1_action_exs[0].id, wf_2_ex_1.id) self.assertEqual(states.RUNNING, wf_2_ex_1.state) self.assertEqual(1, len(wf_2_ex_1_task_execs)) self.assertEqual(states.RUNNING, wf_2_ex_1_task_1_ex.state) self.assertEqual(1, len(wf_2_ex_1_task_1_action_exs)) self.assertEqual(states.RUNNING, wf_2_ex_1_task_1_action_exs[0].state) # Check state of wf2 (2) subworkflow execution. self.assertEqual(states.RUNNING, wf_1_task_1_action_exs[1].state) self.assertEqual(wf_1_task_1_action_exs[1].id, wf_2_ex_2.id) self.assertEqual(states.RUNNING, wf_2_ex_2.state) self.assertEqual(1, len(wf_2_ex_2_task_execs)) self.assertEqual(states.RUNNING, wf_2_ex_2_task_1_ex.state) self.assertEqual(1, len(wf_2_ex_2_task_1_action_exs)) self.assertEqual(states.RUNNING, wf_2_ex_2_task_1_action_exs[0].state) # Check state of wf2 (3) subworkflow execution. self.assertEqual(states.RUNNING, wf_1_task_1_action_exs[2].state) self.assertEqual(wf_1_task_1_action_exs[2].id, wf_2_ex_3.id) self.assertEqual(states.RUNNING, wf_2_ex_3.state) self.assertEqual(1, len(wf_2_ex_3_task_execs)) self.assertEqual(states.RUNNING, wf_2_ex_3_task_1_ex.state) self.assertEqual(1, len(wf_2_ex_3_task_1_action_exs)) self.assertEqual(states.RUNNING, wf_2_ex_3_task_1_action_exs[0].state) # Check state of wf3 subworkflow execution. self.assertEqual(1, len(wf_1_task_2_action_exs)) self.assertEqual(states.RUNNING, wf_1_task_2_action_exs[0].state) self.assertEqual(wf_1_task_2_action_exs[0].id, wf_3_ex.id) self.assertEqual(states.RUNNING, wf_3_ex.state) self.assertEqual(1, len(wf_3_task_execs)) self.assertEqual(states.RUNNING, wf_3_task_1_ex.state) self.assertEqual(1, len(wf_3_task_1_action_exs)) self.assertEqual(states.RUNNING, wf_3_task_1_action_exs[0].state) # Pause the main workflow. 
self.engine.pause_workflow(wf_1_ex.id) self.await_workflow_paused(wf_2_ex_1.id) self.await_workflow_paused(wf_2_ex_2.id) self.await_workflow_paused(wf_2_ex_3.id) self.await_workflow_paused(wf_3_ex.id) self.await_task_paused(wf_1_task_1_ex.id) self.await_task_paused(wf_1_task_2_ex.id) self.await_workflow_paused(wf_1_ex.id) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() # Get objects for the parent workflow execution. wf_1_ex = self._assert_single_item(wf_execs, name='wb.wf1') wf_1_task_execs = wf_1_ex.task_executions wf_1_task_1_ex = self._assert_single_item( wf_1_ex.task_executions, name='task1' ) wf_1_task_1_action_exs = sorted( wf_1_task_1_ex.executions, key=lambda x: x['runtime_context']['index'] ) wf_1_task_2_ex = self._assert_single_item( wf_1_ex.task_executions, name='task2' ) wf_1_task_2_action_exs = wf_1_task_2_ex.executions # Get objects for the with-items subworkflow executions. wf_2_ex_1 = db_api.get_workflow_execution( wf_1_task_1_action_exs[0].id ) wf_2_ex_1_task_execs = wf_2_ex_1.task_executions wf_2_ex_1_task_1_ex = self._assert_single_item( wf_2_ex_1.task_executions, name='task1' ) wf_2_ex_1_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_1_task_1_ex.id ) wf_2_ex_2 = db_api.get_workflow_execution( wf_1_task_1_action_exs[1].id ) wf_2_ex_2_task_execs = wf_2_ex_2.task_executions wf_2_ex_2_task_1_ex = self._assert_single_item( wf_2_ex_2.task_executions, name='task1' ) wf_2_ex_2_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_2_task_1_ex.id ) wf_2_ex_3 = db_api.get_workflow_execution( wf_1_task_1_action_exs[2].id ) wf_2_ex_3_task_execs = wf_2_ex_3.task_executions wf_2_ex_3_task_1_ex = self._assert_single_item( wf_2_ex_3.task_executions, name='task1' ) wf_2_ex_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_3_task_1_ex.id ) # Get objects for the wf3 subworkflow execution. 
wf_3_ex = self._assert_single_item(wf_execs, name='wb.wf3') wf_3_task_execs = wf_3_ex.task_executions wf_3_task_1_ex = self._assert_single_item( wf_3_ex.task_executions, name='task1' ) wf_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_3_task_1_ex.id ) # Check state of parent workflow execution. self.assertEqual(states.PAUSED, wf_1_ex.state) self.assertEqual(states.PAUSED, wf_1_task_1_ex.state) self.assertEqual(states.PAUSED, wf_1_task_2_ex.state) # Check state of wf2 (1) subworkflow execution. self.assertEqual(states.PAUSED, wf_1_task_1_action_exs[0].state) self.assertEqual(states.PAUSED, wf_2_ex_1.state) self.assertEqual(states.RUNNING, wf_2_ex_1_task_1_ex.state) self.assertEqual(states.RUNNING, wf_2_ex_1_task_1_action_exs[0].state) # Check state of wf2 (2) subworkflow execution. self.assertEqual(states.PAUSED, wf_1_task_1_action_exs[1].state) self.assertEqual(states.PAUSED, wf_2_ex_2.state) self.assertEqual(states.RUNNING, wf_2_ex_2_task_1_ex.state) self.assertEqual(states.RUNNING, wf_2_ex_2_task_1_action_exs[0].state) # Check state of wf2 (3) subworkflow execution. self.assertEqual(states.PAUSED, wf_1_task_1_action_exs[2].state) self.assertEqual(states.PAUSED, wf_2_ex_3.state) self.assertEqual(states.RUNNING, wf_2_ex_3_task_1_ex.state) self.assertEqual(states.RUNNING, wf_2_ex_3_task_1_action_exs[0].state) # Check state of wf3 subworkflow execution. self.assertEqual(states.PAUSED, wf_1_task_2_action_exs[0].state) self.assertEqual(states.PAUSED, wf_3_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_action_exs[0].state) # Resume the main workflow. 
self.engine.resume_workflow(wf_1_ex.id) self.await_workflow_running(wf_2_ex_1.id) self.await_workflow_running(wf_2_ex_2.id) self.await_workflow_running(wf_2_ex_3.id) self.await_workflow_running(wf_3_ex.id) self.await_task_running(wf_1_task_1_ex.id) self.await_task_running(wf_1_task_2_ex.id) self.await_workflow_running(wf_1_ex.id) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() # Get objects for the parent workflow execution. wf_1_ex = self._assert_single_item(wf_execs, name='wb.wf1') wf_1_task_execs = wf_1_ex.task_executions wf_1_task_1_ex = self._assert_single_item( wf_1_ex.task_executions, name='task1' ) wf_1_task_1_action_exs = sorted( wf_1_task_1_ex.executions, key=lambda x: x['runtime_context']['index'] ) wf_1_task_2_ex = self._assert_single_item( wf_1_ex.task_executions, name='task2' ) wf_1_task_2_action_exs = wf_1_task_2_ex.executions # Get objects for the with-items subworkflow executions. wf_2_ex_1 = db_api.get_workflow_execution( wf_1_task_1_action_exs[0].id ) wf_2_ex_1_task_execs = wf_2_ex_1.task_executions wf_2_ex_1_task_1_ex = self._assert_single_item( wf_2_ex_1.task_executions, name='task1' ) wf_2_ex_1_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_1_task_1_ex.id ) wf_2_ex_2 = db_api.get_workflow_execution( wf_1_task_1_action_exs[1].id ) wf_2_ex_2_task_execs = wf_2_ex_2.task_executions wf_2_ex_2_task_1_ex = self._assert_single_item( wf_2_ex_2.task_executions, name='task1' ) wf_2_ex_2_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_2_task_1_ex.id ) wf_2_ex_3 = db_api.get_workflow_execution( wf_1_task_1_action_exs[2].id ) wf_2_ex_3_task_execs = wf_2_ex_3.task_executions wf_2_ex_3_task_1_ex = self._assert_single_item( wf_2_ex_3.task_executions, name='task1' ) wf_2_ex_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_3_task_1_ex.id ) # Get objects for the wf3 subworkflow execution. 
wf_3_ex = self._assert_single_item(wf_execs, name='wb.wf3') wf_3_task_execs = wf_3_ex.task_executions wf_3_task_1_ex = self._assert_single_item( wf_3_ex.task_executions, name='task1' ) wf_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_3_task_1_ex.id ) # Check state of parent workflow execution. self.assertEqual(states.RUNNING, wf_1_ex.state) self.assertEqual(states.RUNNING, wf_1_task_1_ex.state) self.assertEqual(states.RUNNING, wf_1_task_2_ex.state) # Check state of wf2 (1) subworkflow execution. self.assertEqual(states.RUNNING, wf_1_task_1_action_exs[0].state) self.assertEqual(states.RUNNING, wf_2_ex_1.state) self.assertEqual(states.RUNNING, wf_2_ex_1_task_1_ex.state) self.assertEqual(states.RUNNING, wf_2_ex_1_task_1_action_exs[0].state) # Check state of wf2 (2) subworkflow execution. self.assertEqual(states.RUNNING, wf_1_task_1_action_exs[1].state) self.assertEqual(states.RUNNING, wf_2_ex_2.state) self.assertEqual(states.RUNNING, wf_2_ex_2_task_1_ex.state) self.assertEqual(states.RUNNING, wf_2_ex_2_task_1_action_exs[0].state) # Check state of wf2 (3) subworkflow execution. self.assertEqual(states.RUNNING, wf_1_task_1_action_exs[2].state) self.assertEqual(states.RUNNING, wf_2_ex_3.state) self.assertEqual(states.RUNNING, wf_2_ex_3_task_1_ex.state) self.assertEqual(states.RUNNING, wf_2_ex_3_task_1_action_exs[0].state) # Check state of wf3 subworkflow execution. self.assertEqual(states.RUNNING, wf_1_task_2_action_exs[0].state) self.assertEqual(states.RUNNING, wf_3_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_action_exs[0].state) # Complete action execution of subworkflows. 
self.engine.on_action_complete( wf_2_ex_1_task_1_action_exs[0].id, ml_actions.Result(data={'result': 'foobar'}) ) self.engine.on_action_complete( wf_2_ex_2_task_1_action_exs[0].id, ml_actions.Result(data={'result': 'foobar'}) ) self.engine.on_action_complete( wf_2_ex_3_task_1_action_exs[0].id, ml_actions.Result(data={'result': 'foobar'}) ) self.engine.on_action_complete( wf_3_task_1_action_exs[0].id, ml_actions.Result(data={'result': 'foobar'}) ) self.await_workflow_success(wf_2_ex_1.id) self.await_workflow_success(wf_2_ex_2.id) self.await_workflow_success(wf_2_ex_3.id) self.await_workflow_success(wf_3_ex.id) self.await_task_success(wf_1_task_1_ex.id) self.await_task_success(wf_1_task_2_ex.id) self.await_workflow_success(wf_1_ex.id) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() # Get objects for the parent workflow execution. wf_1_ex = self._assert_single_item(wf_execs, name='wb.wf1') wf_1_task_execs = wf_1_ex.task_executions wf_1_task_1_ex = self._assert_single_item( wf_1_ex.task_executions, name='task1' ) wf_1_task_1_action_exs = sorted( wf_1_task_1_ex.executions, key=lambda x: x['runtime_context']['index'] ) wf_1_task_2_ex = self._assert_single_item( wf_1_ex.task_executions, name='task2' ) wf_1_task_2_action_exs = wf_1_task_2_ex.executions # Get objects for the with-items subworkflow executions. 
wf_2_ex_1 = db_api.get_workflow_execution( wf_1_task_1_action_exs[0].id ) wf_2_ex_1_task_execs = wf_2_ex_1.task_executions wf_2_ex_1_task_1_ex = self._assert_single_item( wf_2_ex_1.task_executions, name='task1' ) wf_2_ex_1_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_1_task_1_ex.id ) wf_2_ex_2 = db_api.get_workflow_execution( wf_1_task_1_action_exs[1].id ) wf_2_ex_2_task_execs = wf_2_ex_2.task_executions wf_2_ex_2_task_1_ex = self._assert_single_item( wf_2_ex_2.task_executions, name='task1' ) wf_2_ex_2_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_2_task_1_ex.id ) wf_2_ex_3 = db_api.get_workflow_execution( wf_1_task_1_action_exs[2].id ) wf_2_ex_3_task_execs = wf_2_ex_3.task_executions wf_2_ex_3_task_1_ex = self._assert_single_item( wf_2_ex_3.task_executions, name='task1' ) wf_2_ex_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_3_task_1_ex.id ) # Get objects for the wf3 subworkflow execution. wf_3_ex = self._assert_single_item(wf_execs, name='wb.wf3') wf_3_task_execs = wf_3_ex.task_executions wf_3_task_1_ex = self._assert_single_item( wf_3_ex.task_executions, name='task1' ) wf_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_3_task_1_ex.id ) # Check state of parent workflow execution. self.assertEqual(states.SUCCESS, wf_1_ex.state) self.assertEqual(states.SUCCESS, wf_1_task_1_ex.state) self.assertEqual(states.SUCCESS, wf_1_task_2_ex.state) # Check state of wf2 (1) subworkflow execution. self.assertEqual(states.SUCCESS, wf_1_task_1_action_exs[0].state) self.assertEqual(states.SUCCESS, wf_2_ex_1.state) self.assertEqual(states.SUCCESS, wf_2_ex_1_task_1_ex.state) self.assertEqual(states.SUCCESS, wf_2_ex_1_task_1_action_exs[0].state) # Check state of wf2 (2) subworkflow execution. 
self.assertEqual(states.SUCCESS, wf_1_task_1_action_exs[1].state) self.assertEqual(states.SUCCESS, wf_2_ex_2.state) self.assertEqual(states.SUCCESS, wf_2_ex_2_task_1_ex.state) self.assertEqual(states.SUCCESS, wf_2_ex_2_task_1_action_exs[0].state) # Check state of wf2 (3) subworkflow execution. self.assertEqual(states.SUCCESS, wf_1_task_1_action_exs[2].state) self.assertEqual(states.SUCCESS, wf_2_ex_3.state) self.assertEqual(states.SUCCESS, wf_2_ex_3_task_1_ex.state) self.assertEqual(states.SUCCESS, wf_2_ex_3_task_1_action_exs[0].state) # Check state of wf3 subworkflow execution. self.assertEqual(states.SUCCESS, wf_1_task_2_action_exs[0].state) self.assertEqual(states.SUCCESS, wf_3_ex.state) self.assertEqual(states.SUCCESS, wf_3_task_1_ex.state) self.assertEqual(states.SUCCESS, wf_3_task_1_action_exs[0].state) def test_pause_resume_cascade_up_from_with_items_subworkflow(self): workbook = """ version: '2.0' name: wb workflows: wf1: tasks: task1: with-items: i in <% range(3) %> workflow: wf2 on-success: - task3 task2: workflow: wf3 on-success: - task3 task3: join: all action: std.noop wf2: tasks: task1: action: std.async_noop on-success: - task2 task2: action: std.noop wf3: tasks: task1: action: std.async_noop on-success: - task2 task2: action: std.noop """ wb_service.create_workbook_v2(workbook) # Start workflow execution. wf_1_ex = self.engine.start_workflow('wb.wf1') self.await_workflow_state(wf_1_ex.id, states.RUNNING) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() # Get objects for the parent workflow execution. 
wf_1_ex = self._assert_single_item(wf_execs, name='wb.wf1') wf_1_task_execs = wf_1_ex.task_executions wf_1_task_1_ex = self._assert_single_item( wf_1_ex.task_executions, name='task1' ) wf_1_task_1_action_exs = sorted( wf_1_task_1_ex.executions, key=lambda x: x['runtime_context']['index'] ) wf_1_task_2_ex = self._assert_single_item( wf_1_ex.task_executions, name='task2' ) wf_1_task_2_action_exs = wf_1_task_2_ex.executions # Get objects for the with-items subworkflow executions. wf_2_ex_1 = db_api.get_workflow_execution( wf_1_task_1_action_exs[0].id ) wf_2_ex_1_task_execs = wf_2_ex_1.task_executions wf_2_ex_1_task_1_ex = self._assert_single_item( wf_2_ex_1.task_executions, name='task1' ) wf_2_ex_1_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_1_task_1_ex.id ) wf_2_ex_2 = db_api.get_workflow_execution( wf_1_task_1_action_exs[1].id ) wf_2_ex_2_task_execs = wf_2_ex_2.task_executions wf_2_ex_2_task_1_ex = self._assert_single_item( wf_2_ex_2.task_executions, name='task1' ) wf_2_ex_2_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_2_task_1_ex.id ) wf_2_ex_3 = db_api.get_workflow_execution( wf_1_task_1_action_exs[2].id ) wf_2_ex_3_task_execs = wf_2_ex_3.task_executions wf_2_ex_3_task_1_ex = self._assert_single_item( wf_2_ex_3.task_executions, name='task1' ) wf_2_ex_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_3_task_1_ex.id ) # Get objects for the wf3 subworkflow execution. wf_3_ex = self._assert_single_item(wf_execs, name='wb.wf3') wf_3_task_execs = wf_3_ex.task_executions wf_3_task_1_ex = self._assert_single_item( wf_3_ex.task_executions, name='task1' ) wf_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_3_task_1_ex.id ) # Check state of parent workflow execution. 
self.assertEqual(states.RUNNING, wf_1_ex.state) self.assertEqual(2, len(wf_1_task_execs)) self.assertEqual(states.RUNNING, wf_1_task_1_ex.state) self.assertEqual(states.RUNNING, wf_1_task_2_ex.state) self.assertEqual(3, len(wf_1_task_1_action_exs)) # Check state of wf2 (1) subworkflow execution. self.assertEqual(states.RUNNING, wf_1_task_1_action_exs[0].state) self.assertEqual(wf_1_task_1_action_exs[0].id, wf_2_ex_1.id) self.assertEqual(states.RUNNING, wf_2_ex_1.state) self.assertEqual(1, len(wf_2_ex_1_task_execs)) self.assertEqual(states.RUNNING, wf_2_ex_1_task_1_ex.state) self.assertEqual(1, len(wf_2_ex_1_task_1_action_exs)) self.assertEqual(states.RUNNING, wf_2_ex_1_task_1_action_exs[0].state) # Check state of wf2 (2) subworkflow execution. self.assertEqual(states.RUNNING, wf_1_task_1_action_exs[1].state) self.assertEqual(wf_1_task_1_action_exs[1].id, wf_2_ex_2.id) self.assertEqual(states.RUNNING, wf_2_ex_2.state) self.assertEqual(1, len(wf_2_ex_2_task_execs)) self.assertEqual(states.RUNNING, wf_2_ex_2_task_1_ex.state) self.assertEqual(1, len(wf_2_ex_2_task_1_action_exs)) self.assertEqual(states.RUNNING, wf_2_ex_2_task_1_action_exs[0].state) # Check state of wf2 (3) subworkflow execution. self.assertEqual(states.RUNNING, wf_1_task_1_action_exs[2].state) self.assertEqual(wf_1_task_1_action_exs[2].id, wf_2_ex_3.id) self.assertEqual(states.RUNNING, wf_2_ex_3.state) self.assertEqual(1, len(wf_2_ex_3_task_execs)) self.assertEqual(states.RUNNING, wf_2_ex_3_task_1_ex.state) self.assertEqual(1, len(wf_2_ex_3_task_1_action_exs)) self.assertEqual(states.RUNNING, wf_2_ex_3_task_1_action_exs[0].state) # Check state of wf3 subworkflow execution. 
self.assertEqual(1, len(wf_1_task_2_action_exs)) self.assertEqual(states.RUNNING, wf_1_task_2_action_exs[0].state) self.assertEqual(wf_1_task_2_action_exs[0].id, wf_3_ex.id) self.assertEqual(states.RUNNING, wf_3_ex.state) self.assertEqual(1, len(wf_3_task_execs)) self.assertEqual(states.RUNNING, wf_3_task_1_ex.state) self.assertEqual(1, len(wf_3_task_1_action_exs)) self.assertEqual(states.RUNNING, wf_3_task_1_action_exs[0].state) # Pause one of the subworkflows in the with-items task. self.engine.pause_workflow(wf_2_ex_1.id) self.await_workflow_paused(wf_2_ex_1.id) self.await_workflow_paused(wf_2_ex_2.id) self.await_workflow_paused(wf_2_ex_3.id) self.await_workflow_paused(wf_3_ex.id) self.await_task_paused(wf_1_task_1_ex.id) self.await_task_paused(wf_1_task_2_ex.id) self.await_workflow_paused(wf_1_ex.id) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() # Get objects for the parent workflow execution. wf_1_ex = self._assert_single_item(wf_execs, name='wb.wf1') wf_1_task_1_ex = self._assert_single_item( wf_1_ex.task_executions, name='task1' ) wf_1_task_1_action_exs = sorted( wf_1_task_1_ex.executions, key=lambda x: x['runtime_context']['index'] ) wf_1_task_2_ex = self._assert_single_item( wf_1_ex.task_executions, name='task2' ) wf_1_task_2_action_exs = wf_1_task_2_ex.executions # Get objects for the with-items subworkflow executions. 
wf_2_ex_1 = db_api.get_workflow_execution( wf_1_task_1_action_exs[0].id ) wf_2_ex_1_task_1_ex = self._assert_single_item( wf_2_ex_1.task_executions, name='task1' ) wf_2_ex_1_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_1_task_1_ex.id ) wf_2_ex_2 = db_api.get_workflow_execution( wf_1_task_1_action_exs[1].id ) wf_2_ex_2_task_1_ex = self._assert_single_item( wf_2_ex_2.task_executions, name='task1' ) wf_2_ex_2_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_2_task_1_ex.id ) wf_2_ex_3 = db_api.get_workflow_execution( wf_1_task_1_action_exs[2].id ) wf_2_ex_3_task_1_ex = self._assert_single_item( wf_2_ex_3.task_executions, name='task1' ) wf_2_ex_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_3_task_1_ex.id ) # Get objects for the wf3 subworkflow execution. wf_3_ex = self._assert_single_item(wf_execs, name='wb.wf3') wf_3_task_1_ex = self._assert_single_item( wf_3_ex.task_executions, name='task1' ) wf_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_3_task_1_ex.id ) # Check state of parent workflow execution. self.assertEqual(states.PAUSED, wf_1_ex.state) self.assertEqual(states.PAUSED, wf_1_task_1_ex.state) self.assertEqual(states.PAUSED, wf_1_task_2_ex.state) # Check state of wf2 (1) subworkflow execution. self.assertEqual(states.PAUSED, wf_1_task_1_action_exs[0].state) self.assertEqual(states.PAUSED, wf_2_ex_1.state) self.assertEqual(states.RUNNING, wf_2_ex_1_task_1_ex.state) self.assertEqual(states.RUNNING, wf_2_ex_1_task_1_action_exs[0].state) # Check state of wf2 (2) subworkflow execution. self.assertEqual(states.PAUSED, wf_1_task_1_action_exs[1].state) self.assertEqual(states.PAUSED, wf_2_ex_2.state) self.assertEqual(states.RUNNING, wf_2_ex_2_task_1_ex.state) self.assertEqual(states.RUNNING, wf_2_ex_2_task_1_action_exs[0].state) # Check state of wf2 (3) subworkflow execution. 
self.assertEqual(states.PAUSED, wf_1_task_1_action_exs[2].state) self.assertEqual(states.PAUSED, wf_2_ex_3.state) self.assertEqual(states.RUNNING, wf_2_ex_3_task_1_ex.state) self.assertEqual(states.RUNNING, wf_2_ex_3_task_1_action_exs[0].state) # Check state of wf3 subworkflow execution. self.assertEqual(states.PAUSED, wf_1_task_2_action_exs[0].state) self.assertEqual(states.PAUSED, wf_3_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_action_exs[0].state) # NOTE(rakhmerov): Since cascade pausing is not atomic we need # to make sure that all internal operations related to pausing # one of workflow executions 'wb.wf2' are completed. So we have # to look if any "_on_action_update" calls are scheduled. def _predicate(): return all( [ '_on_action_update' not in c.target_method_name for c in db_api.get_delayed_calls() ] ) self._await(_predicate) # Resume one of the subworkflows in the with-items task. self.engine.resume_workflow(wf_2_ex_1.id) self.await_workflow_running(wf_2_ex_1.id) self.await_workflow_paused(wf_2_ex_2.id) self.await_workflow_paused(wf_2_ex_3.id) self.await_workflow_paused(wf_3_ex.id) self.await_task_paused(wf_1_task_1_ex.id) self.await_task_paused(wf_1_task_2_ex.id) self.await_workflow_paused(wf_1_ex.id) # Complete action execution of the subworkflow that is resumed. self.engine.on_action_complete( wf_2_ex_1_task_1_action_exs[0].id, ml_actions.Result(data={'result': 'foobar'}) ) self.await_workflow_success(wf_2_ex_1.id) self.await_workflow_paused(wf_2_ex_2.id) self.await_workflow_paused(wf_2_ex_3.id) self.await_workflow_paused(wf_3_ex.id) self.await_task_paused(wf_1_task_1_ex.id) self.await_task_paused(wf_1_task_2_ex.id) self.await_workflow_paused(wf_1_ex.id) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() # Get objects for the parent workflow execution. 
wf_1_ex = self._assert_single_item(wf_execs, name='wb.wf1') wf_1_task_1_ex = self._assert_single_item( wf_1_ex.task_executions, name='task1' ) wf_1_task_1_action_exs = sorted( wf_1_task_1_ex.executions, key=lambda x: x['runtime_context']['index'] ) wf_1_task_2_ex = self._assert_single_item( wf_1_ex.task_executions, name='task2' ) wf_1_task_2_action_exs = wf_1_task_2_ex.executions # Get objects for the with-items subworkflow executions. wf_2_ex_1 = db_api.get_workflow_execution( wf_1_task_1_action_exs[0].id ) wf_2_ex_1_task_1_ex = self._assert_single_item( wf_2_ex_1.task_executions, name='task1' ) wf_2_ex_1_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_1_task_1_ex.id ) wf_2_ex_2 = db_api.get_workflow_execution( wf_1_task_1_action_exs[1].id ) wf_2_ex_2_task_1_ex = self._assert_single_item( wf_2_ex_2.task_executions, name='task1' ) wf_2_ex_2_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_2_task_1_ex.id ) wf_2_ex_3 = db_api.get_workflow_execution( wf_1_task_1_action_exs[2].id ) wf_2_ex_3_task_execs = wf_2_ex_3.task_executions wf_2_ex_3_task_1_ex = self._assert_single_item( wf_2_ex_3.task_executions, name='task1' ) wf_2_ex_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_3_task_1_ex.id ) # Get objects for the wf3 subworkflow execution. wf_3_ex = self._assert_single_item(wf_execs, name='wb.wf3') wf_3_task_execs = wf_3_ex.task_executions wf_3_task_1_ex = self._assert_single_item( wf_3_ex.task_executions, name='task1' ) wf_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_3_task_1_ex.id ) # Check state of parent workflow execution. self.assertEqual(states.PAUSED, wf_1_ex.state) self.assertEqual(states.PAUSED, wf_1_task_1_ex.state) self.assertEqual(states.PAUSED, wf_1_task_2_ex.state) # Check state of wf2 (1) subworkflow execution. 
self.assertEqual(states.SUCCESS, wf_1_task_1_action_exs[0].state) self.assertEqual(states.SUCCESS, wf_2_ex_1.state) self.assertEqual(states.SUCCESS, wf_2_ex_1_task_1_ex.state) self.assertEqual(states.SUCCESS, wf_2_ex_1_task_1_action_exs[0].state) # Check state of wf2 (2) subworkflow execution. self.assertEqual(states.PAUSED, wf_1_task_1_action_exs[1].state) self.assertEqual(states.PAUSED, wf_2_ex_2.state) self.assertEqual(states.RUNNING, wf_2_ex_2_task_1_ex.state) self.assertEqual(states.RUNNING, wf_2_ex_2_task_1_action_exs[0].state) # Check state of wf2 (3) subworkflow execution. self.assertEqual(states.PAUSED, wf_1_task_1_action_exs[2].state) self.assertEqual(states.PAUSED, wf_2_ex_3.state) self.assertEqual(states.RUNNING, wf_2_ex_3_task_1_ex.state) self.assertEqual(states.RUNNING, wf_2_ex_3_task_1_action_exs[0].state) # Check state of wf3 subworkflow execution. self.assertEqual(states.PAUSED, wf_1_task_2_action_exs[0].state) self.assertEqual(states.PAUSED, wf_3_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_action_exs[0].state) # Resume one of the remaining subworkflows. self.engine.resume_workflow(wf_2_ex_2.id) self.engine.resume_workflow(wf_2_ex_3.id) self.engine.resume_workflow(wf_3_ex.id) self.await_workflow_running(wf_2_ex_2.id) self.await_workflow_running(wf_2_ex_3.id) self.await_workflow_running(wf_3_ex.id) self.await_task_running(wf_1_task_1_ex.id) self.await_task_running(wf_1_task_2_ex.id) self.await_workflow_running(wf_1_ex.id) # Complete action executions of the remaining subworkflows. 
self.engine.on_action_complete( wf_2_ex_2_task_1_action_exs[0].id, ml_actions.Result(data={'result': 'foobar'}) ) self.engine.on_action_complete( wf_2_ex_3_task_1_action_exs[0].id, ml_actions.Result(data={'result': 'foobar'}) ) self.engine.on_action_complete( wf_3_task_1_action_exs[0].id, ml_actions.Result(data={'result': 'foobar'}) ) self.await_workflow_success(wf_2_ex_1.id) self.await_workflow_success(wf_2_ex_2.id) self.await_workflow_success(wf_2_ex_3.id) self.await_workflow_success(wf_3_ex.id) self.await_task_success(wf_1_task_1_ex.id) self.await_task_success(wf_1_task_2_ex.id) self.await_workflow_success(wf_1_ex.id) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() # Get objects for the parent workflow execution. wf_1_ex = self._assert_single_item(wf_execs, name='wb.wf1') wf_1_task_execs = wf_1_ex.task_executions wf_1_task_1_ex = self._assert_single_item( wf_1_ex.task_executions, name='task1' ) wf_1_task_1_action_exs = sorted( wf_1_task_1_ex.executions, key=lambda x: x['runtime_context']['index'] ) wf_1_task_2_ex = self._assert_single_item( wf_1_ex.task_executions, name='task2' ) wf_1_task_2_action_exs = wf_1_task_2_ex.executions # Get objects for the with-items subworkflow executions. 
wf_2_ex_1 = db_api.get_workflow_execution( wf_1_task_1_action_exs[0].id ) wf_2_ex_1_task_execs = wf_2_ex_1.task_executions wf_2_ex_1_task_1_ex = self._assert_single_item( wf_2_ex_1.task_executions, name='task1' ) wf_2_ex_1_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_1_task_1_ex.id ) wf_2_ex_2 = db_api.get_workflow_execution( wf_1_task_1_action_exs[1].id ) wf_2_ex_2_task_execs = wf_2_ex_2.task_executions wf_2_ex_2_task_1_ex = self._assert_single_item( wf_2_ex_2.task_executions, name='task1' ) wf_2_ex_2_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_2_task_1_ex.id ) wf_2_ex_3 = db_api.get_workflow_execution( wf_1_task_1_action_exs[2].id ) wf_2_ex_3_task_execs = wf_2_ex_3.task_executions wf_2_ex_3_task_1_ex = self._assert_single_item( wf_2_ex_3.task_executions, name='task1' ) wf_2_ex_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_ex_3_task_1_ex.id ) # Get objects for the wf3 subworkflow execution. wf_3_ex = self._assert_single_item(wf_execs, name='wb.wf3') wf_3_task_execs = wf_3_ex.task_executions wf_3_task_1_ex = self._assert_single_item( wf_3_ex.task_executions, name='task1' ) wf_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_3_task_1_ex.id ) # Check state of parent workflow execution. self.assertEqual(states.SUCCESS, wf_1_ex.state) self.assertEqual(states.SUCCESS, wf_1_task_1_ex.state) self.assertEqual(states.SUCCESS, wf_1_task_2_ex.state) # Check state of wf2 (1) subworkflow execution. self.assertEqual(states.SUCCESS, wf_1_task_1_action_exs[0].state) self.assertEqual(states.SUCCESS, wf_2_ex_1.state) self.assertEqual(states.SUCCESS, wf_2_ex_1_task_1_ex.state) self.assertEqual(states.SUCCESS, wf_2_ex_1_task_1_action_exs[0].state) # Check state of wf2 (2) subworkflow execution. 
self.assertEqual(states.SUCCESS, wf_1_task_1_action_exs[1].state) self.assertEqual(states.SUCCESS, wf_2_ex_2.state) self.assertEqual(states.SUCCESS, wf_2_ex_2_task_1_ex.state) self.assertEqual(states.SUCCESS, wf_2_ex_2_task_1_action_exs[0].state) # Check state of wf2 (3) subworkflow execution. self.assertEqual(states.SUCCESS, wf_1_task_1_action_exs[2].state) self.assertEqual(states.SUCCESS, wf_2_ex_3.state) self.assertEqual(states.SUCCESS, wf_2_ex_3_task_1_ex.state) self.assertEqual(states.SUCCESS, wf_2_ex_3_task_1_action_exs[0].state) # Check state of wf3 subworkflow execution. self.assertEqual(states.SUCCESS, wf_1_task_2_action_exs[0].state) self.assertEqual(states.SUCCESS, wf_3_ex.state) self.assertEqual(states.SUCCESS, wf_3_task_1_ex.state) self.assertEqual(states.SUCCESS, wf_3_task_1_action_exs[0].state) def test_pause_resume_cascade_up_from_subworkflow_pause_before(self): workbook = """ version: '2.0' name: wb workflows: wf1: tasks: task1: workflow: wf2 on-success: - task3 task2: workflow: wf3 on-success: - task3 task3: join: all action: std.noop wf2: tasks: task1: action: std.noop on-success: - task2 task2: pause-before: true action: std.async_noop wf3: tasks: task1: action: std.async_noop on-success: - task2 task2: action: std.noop """ wb_service.create_workbook_v2(workbook) # Start workflow execution. wf_1_ex = self.engine.start_workflow('wb.wf1') self.await_workflow_state(wf_1_ex.id, states.PAUSED) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() # Get objects for the parent workflow execution. wf_1_ex = self._assert_single_item(wf_execs, name='wb.wf1') wf_1_task_execs = wf_1_ex.task_executions wf_1_task_1_ex = self._assert_single_item( wf_1_ex.task_executions, name='task1' ) wf_1_task_1_action_exs = wf_1_task_1_ex.executions wf_1_task_2_ex = self._assert_single_item( wf_1_ex.task_executions, name='task2' ) wf_1_task_2_action_exs = wf_1_task_2_ex.executions # Get objects for the subworkflow executions. 
wf_2_ex = self._assert_single_item(wf_execs, name='wb.wf2') wf_2_task_execs = wf_2_ex.task_executions wf_2_task_1_ex = self._assert_single_item( wf_2_ex.task_executions, name='task1' ) wf_2_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_task_1_ex.id ) wf_2_task_2_ex = self._assert_single_item( wf_2_ex.task_executions, name='task2' ) wf_2_task_2_action_exs = db_api.get_action_executions( task_execution_id=wf_2_task_2_ex.id ) wf_3_ex = self._assert_single_item(wf_execs, name='wb.wf3') wf_3_task_execs = wf_3_ex.task_executions wf_3_task_1_ex = self._assert_single_item( wf_3_ex.task_executions, name='task1' ) wf_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_3_task_1_ex.id ) self.assertEqual(states.PAUSED, wf_2_ex.state) self.assertEqual(states.SUCCESS, wf_2_task_1_ex.state) self.assertEqual(states.SUCCESS, wf_2_task_1_action_exs[0].state) self.assertEqual(states.IDLE, wf_2_task_2_ex.state) self.assertEqual(0, len(wf_2_task_2_action_exs)) self.assertEqual(states.PAUSED, wf_3_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_action_exs[0].state) self.assertEqual(states.PAUSED, wf_1_task_1_action_exs[0].state) self.assertEqual(states.PAUSED, wf_1_task_1_ex.state) self.assertEqual(states.PAUSED, wf_1_task_2_action_exs[0].state) self.assertEqual(states.PAUSED, wf_1_task_2_ex.state) self.assertEqual(states.PAUSED, wf_1_ex.state) # Resume the main workflow. self.engine.resume_workflow(wf_1_ex.id) self.await_workflow_running(wf_1_ex.id) self.await_workflow_running(wf_2_ex.id) self.await_workflow_running(wf_3_ex.id) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() # Get objects for the parent workflow execution. 
wf_1_ex = self._assert_single_item(wf_execs, name='wb.wf1') wf_1_task_execs = wf_1_ex.task_executions wf_1_task_1_ex = self._assert_single_item( wf_1_ex.task_executions, name='task1' ) wf_1_task_1_action_exs = wf_1_task_1_ex.executions wf_1_task_2_ex = self._assert_single_item( wf_1_ex.task_executions, name='task2' ) wf_1_task_2_action_exs = wf_1_task_2_ex.executions # Get objects for the subworkflow executions. wf_2_ex = self._assert_single_item(wf_execs, name='wb.wf2') wf_2_task_execs = wf_2_ex.task_executions wf_2_task_1_ex = self._assert_single_item( wf_2_ex.task_executions, name='task1' ) wf_2_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_task_1_ex.id ) wf_2_task_2_ex = self._assert_single_item( wf_2_ex.task_executions, name='task2' ) wf_2_task_2_action_exs = db_api.get_action_executions( task_execution_id=wf_2_task_2_ex.id ) wf_3_ex = self._assert_single_item(wf_execs, name='wb.wf3') wf_3_task_execs = wf_3_ex.task_executions wf_3_task_1_ex = self._assert_single_item( wf_3_ex.task_executions, name='task1' ) wf_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_3_task_1_ex.id ) self.assertEqual(states.RUNNING, wf_2_ex.state) self.assertEqual(states.SUCCESS, wf_2_task_1_ex.state) self.assertEqual(states.SUCCESS, wf_2_task_1_action_exs[0].state) self.assertEqual(states.RUNNING, wf_2_task_2_ex.state) self.assertEqual(states.RUNNING, wf_2_task_2_action_exs[0].state) self.assertEqual(states.RUNNING, wf_3_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_ex.state) self.assertEqual(states.RUNNING, wf_3_task_1_action_exs[0].state) self.assertEqual(states.RUNNING, wf_1_task_1_action_exs[0].state) self.assertEqual(states.RUNNING, wf_1_task_1_ex.state) self.assertEqual(states.RUNNING, wf_1_task_2_action_exs[0].state) self.assertEqual(states.RUNNING, wf_1_task_2_ex.state) self.assertEqual(states.RUNNING, wf_1_ex.state) # Complete action executions of the subworkflows. 
self.engine.on_action_complete( wf_2_task_2_action_exs[0].id, ml_actions.Result(data={'result': 'foobar'}) ) self.engine.on_action_complete( wf_3_task_1_action_exs[0].id, ml_actions.Result(data={'result': 'foobar'}) ) self.await_workflow_success(wf_2_ex.id) self.await_workflow_success(wf_3_ex.id) self.await_workflow_success(wf_1_ex.id) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() # Get objects for the parent workflow execution. wf_1_ex = self._assert_single_item(wf_execs, name='wb.wf1') wf_1_task_execs = wf_1_ex.task_executions wf_1_task_1_ex = self._assert_single_item( wf_1_ex.task_executions, name='task1' ) wf_1_task_1_action_exs = wf_1_task_1_ex.executions wf_1_task_2_ex = self._assert_single_item( wf_1_ex.task_executions, name='task2' ) wf_1_task_2_action_exs = wf_1_task_2_ex.executions wf_1_task_3_ex = self._assert_single_item( wf_1_ex.task_executions, name='task3' ) # Get objects for the subworkflow executions. wf_2_ex = self._assert_single_item(wf_execs, name='wb.wf2') wf_2_task_execs = wf_2_ex.task_executions wf_2_task_1_ex = self._assert_single_item( wf_2_ex.task_executions, name='task1' ) wf_2_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_2_task_1_ex.id ) wf_2_task_2_ex = self._assert_single_item( wf_2_ex.task_executions, name='task2' ) wf_2_task_2_action_exs = db_api.get_action_executions( task_execution_id=wf_2_task_2_ex.id ) wf_3_ex = self._assert_single_item(wf_execs, name='wb.wf3') wf_3_task_execs = wf_3_ex.task_executions wf_3_task_1_ex = self._assert_single_item( wf_3_ex.task_executions, name='task1' ) wf_3_task_1_action_exs = db_api.get_action_executions( task_execution_id=wf_3_task_1_ex.id ) wf_3_task_2_ex = self._assert_single_item( wf_3_ex.task_executions, name='task2' ) wf_3_task_2_action_exs = db_api.get_action_executions( task_execution_id=wf_3_task_2_ex.id ) self.assertEqual(states.SUCCESS, wf_1_ex.state) self.assertEqual(3, len(wf_1_task_execs)) self.assertEqual(states.SUCCESS, 
wf_1_task_1_ex.state) self.assertEqual(states.SUCCESS, wf_1_task_2_ex.state) self.assertEqual(states.SUCCESS, wf_1_task_3_ex.state) self.assertEqual(states.SUCCESS, wf_1_task_1_action_exs[0].state) self.assertEqual(states.SUCCESS, wf_1_task_2_action_exs[0].state) self.assertEqual(states.SUCCESS, wf_2_ex.state) self.assertEqual(2, len(wf_2_task_execs)) self.assertEqual(states.SUCCESS, wf_2_task_1_ex.state) self.assertEqual(states.SUCCESS, wf_2_task_2_ex.state) self.assertEqual(states.SUCCESS, wf_2_task_1_action_exs[0].state) self.assertEqual(states.SUCCESS, wf_2_task_2_action_exs[0].state) self.assertEqual(states.SUCCESS, wf_3_ex.state) self.assertEqual(2, len(wf_3_task_execs)) self.assertEqual(states.SUCCESS, wf_3_task_1_ex.state) self.assertEqual(states.SUCCESS, wf_3_task_2_ex.state) self.assertEqual(states.SUCCESS, wf_3_task_1_action_exs[0].state) self.assertEqual(states.SUCCESS, wf_3_task_2_action_exs[0].state) # mistral-6.0.0/mistral/tests/unit/engine/test_race_condition.py # Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
from eventlet import corolocal
from eventlet import semaphore
from oslo_config import cfg
import testtools

from mistral.db.v2 import api as db_api
from mistral.services import workflows as wf_service
from mistral.tests.unit import base as test_base
from mistral.tests.unit.engine import base
from mistral.workflow import states
from mistral_lib import actions as actions_base

# Use the set_default method to set value otherwise in certain test cases
# the change in value is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan')

WF_LONG_ACTION = """
---
version: '2.0'

wf:
  type: direct

  description: |
    The idea is to use action that runs longer than engine.start_workflow()
    method. And we need to check that engine handles this situation.

  output:
    result: <% $.result %>

  tasks:
    task1:
      action: test.block
      publish:
        result: <% task(task1).result %>
"""

WF_SHORT_ACTION = """
---
version: '2.0'

wf:
  type: direct

  description: |
    The idea is to use action that runs faster than engine.start_workflow().
    And we need to check that engine handles this situation as well. This
    was a situation previously that led to a race condition in engine,
    method on_action_complete() was called while DB transaction in
    start_workflow() was still active (not committed yet). To emulate a
    short action we use a workflow with two start tasks so they run both in
    parallel on the first engine iteration when we call method
    start_workflow(). First task has a short action that just returns a
    predefined result and the second task blocks until the test explicitly
    unblocks it. So the first action will always end before
    start_workflow() method ends.

  output:
    result: <% $.result %>

  tasks:
    task1:
      action: std.echo output=1
      publish:
        result: <% task(task1).result %>

    task2:
      action: test.block
"""

ACTION_SEMAPHORE = None
TEST_SEMAPHORE = None


class BlockingAction(actions_base.Action):
    def __init__(self):
        pass

    @staticmethod
    def unblock_test():
        TEST_SEMAPHORE.release()

    @staticmethod
    def wait_for_test():
        ACTION_SEMAPHORE.acquire()

    def run(self, context):
        self.unblock_test()
        self.wait_for_test()

        print('Action completed [eventlet_id=%s]' % corolocal.get_ident())

        return 'test'

    def test(self):
        pass


class EngineActionRaceConditionTest(base.EngineTestCase):
    def setUp(self):
        super(EngineActionRaceConditionTest, self).setUp()

        global ACTION_SEMAPHORE
        global TEST_SEMAPHORE

        ACTION_SEMAPHORE = semaphore.Semaphore(1)
        TEST_SEMAPHORE = semaphore.Semaphore(0)

        test_base.register_action_class('test.block', BlockingAction)

    @staticmethod
    def block_action():
        ACTION_SEMAPHORE.acquire()

    @staticmethod
    def unblock_action():
        ACTION_SEMAPHORE.release()

    @staticmethod
    def wait_for_action():
        TEST_SEMAPHORE.acquire()

    def test_long_action(self):
        wf_service.create_workflows(WF_LONG_ACTION)

        self.block_action()

        wf_ex = self.engine.start_workflow('wf')

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(states.RUNNING, wf_ex.state)
            self.assertEqual(states.RUNNING, task_execs[0].state)

        self.wait_for_action()

        with db_api.transaction():
            # Here's the point when the action is blocked but already running.
            # Do the same check again, it should always pass.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(states.RUNNING, wf_ex.state)
            self.assertEqual(states.RUNNING, task_execs[0].state)

        self.unblock_action()

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            wf_output = wf_ex.output

        self.assertDictEqual({'result': 'test'}, wf_output)

    # TODO(rakhmerov): Should periodically fail now because of poor
    # transaction isolation support in SQLite. Requires more research
    # to understand all the details. It's not reproducible on MySql.
    @testtools.skip('Skip until we know how to fix it with SQLite.')
    def test_short_action(self):
        wf_service.create_workflows(WF_SHORT_ACTION)

        self.block_action()

        wf_ex = self.engine.start_workflow('wf')

        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertEqual(states.RUNNING, wf_ex.state)

        task_execs = wf_ex.task_executions

        task1_ex = self._assert_single_item(task_execs, name='task1')
        task2_ex = self._assert_single_item(
            task_execs,
            name='task2',
            state=states.RUNNING
        )

        self.await_task_success(task1_ex.id, timeout=10)

        self.unblock_action()

        self.await_task_success(task2_ex.id)
        self.await_workflow_success(wf_ex.id)

        task1_ex = db_api.get_task_execution(task1_ex.id)
        task1_action_ex = db_api.get_action_executions(
            task_execution_id=task1_ex.id
        )[0]

        self.assertEqual(1, task1_action_ex.output['result'])


# === mistral-6.0.0/mistral/tests/unit/engine/test_workflow_variables.py ===

# Copyright 2015 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_config import cfg

from mistral.db.v2 import api as db_api
from mistral.services import workflows as wf_service
from mistral.tests.unit.engine import base
from mistral.workflow import states

# Use the set_default method to set value otherwise in certain test cases
# the change in value is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan')


class WorkflowVariablesTest(base.EngineTestCase):
    def test_workflow_variables(self):
        wf_text = """---
        version: '2.0'

        wf:
          input:
            - param1: "Hello"
            - param2

          vars:
            literal_var: "Literal value"
            yaql_var: "<% $.param1 %> <% $.param2 %>"

          output:
            literal_var: <% $.literal_var %>
            yaql_var: <% $.yaql_var %>

          tasks:
            task1:
              action: std.noop
        """

        wf_service.create_workflows(wf_text)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wf', wf_input={'param2': 'Renat'})

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            wf_output = wf_ex.output

            tasks = wf_ex.task_executions

            task1 = self._assert_single_item(tasks, name='task1')

            self.assertEqual(states.SUCCESS, task1.state)

        self.assertDictEqual(
            {
                'literal_var': 'Literal value',
                'yaql_var': 'Hello Renat'
            },
            wf_output
        )

    def test_dynamic_action_names(self):
        wf_text = """---
        version: '2.0'

        wf2:
          input:
            - wf_action
            - param1

          tasks:
            task1:
              action: <% $.wf_action %> output=<% $.param1 %>
              publish:
                var1: <% task(task1).result %>
        """

        wf_service.create_workflows(wf_text)

        # Start workflow.
        wf_ex = self.engine.start_workflow(
            'wf2',
            wf_input={'wf_action': 'std.echo', 'param1': 'Hello'}
        )

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            wf_output = wf_ex.output

            tasks = wf_ex.task_executions

            task1 = self._assert_single_item(tasks, name='task1')

            self.assertEqual(states.SUCCESS, task1.state)

        self.assertEqual("Hello", wf_output['var1'])

    def test_dynamic_action_names_and_input(self):
        wf_text = """---
        version: '2.0'

        wf3:
          input:
            - wf_action
            - wf_input

          tasks:
            task1:
              action: <% $.wf_action %>
              input: <% $.wf_input %>
              publish:
                var1: <% task(task1).result %>
        """

        wf_service.create_workflows(wf_text)

        # Start workflow.
        wf_ex = self.engine.start_workflow(
            'wf3',
            wf_input={'wf_action': 'std.echo', 'wf_input': {'output': 'Hello'}}
        )

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            wf_output = wf_ex.output

            tasks = wf_ex.task_executions

            task1 = self._assert_single_item(tasks, name='task1')

            self.assertEqual(states.SUCCESS, task1.state)

        self.assertEqual("Hello", wf_output['var1'])


# === mistral-6.0.0/mistral/tests/unit/engine/test_state_info.py ===

# Copyright 2014 - Mirantis, Inc.
# Copyright 2015 - StackStorm, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock
from oslo_config import cfg

from mistral.actions import std_actions
from mistral.db.v2 import api as db_api
from mistral import exceptions as exc
from mistral.services import workflows as wf_service
from mistral.tests.unit.engine import base
from mistral.workflow import states

# Use the set_default method to set value otherwise in certain test cases
# the change in value is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan')


class ExecutionStateInfoTest(base.EngineTestCase):
    def test_state_info(self):
        workflow = """---
        version: '2.0'

        test_wf:
          type: direct

          tasks:
            task1:
              action: std.fail

            task2:
              action: std.noop
        """

        wf_service.create_workflows(workflow)

        # Start workflow.
        wf_ex = self.engine.start_workflow('test_wf')

        self.await_workflow_error(wf_ex.id)

        # Note: We need to reread execution to access related tasks.
        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertIn("error in tasks: task1", wf_ex.state_info)

    def test_state_info_two_failed_branches(self):
        workflow = """---
        version: '2.0'

        test_wf:
          type: direct

          tasks:
            task1:
              action: std.fail

            task2:
              action: std.fail
        """

        wf_service.create_workflows(workflow)

        # Start workflow.
        wf_ex = self.engine.start_workflow('test_wf')

        self.await_workflow_error(wf_ex.id)

        # Note: We need to reread execution to access related tasks.
        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertIn("error in tasks: task1, task2", wf_ex.state_info)

    def test_state_info_with_policies(self):
        workflow = """---
        version: '2.0'

        test_wf:
          type: direct

          tasks:
            task1:
              action: std.fail
              wait-after: 1

            task2:
              action: std.noop
              wait-after: 3
        """

        wf_service.create_workflows(workflow)

        # Start workflow.
        wf_ex = self.engine.start_workflow('test_wf')

        self.await_workflow_error(wf_ex.id)

        # Note: We need to reread execution to access related tasks.
        wf_ex = db_api.get_workflow_execution(wf_ex.id)

        self.assertIn("error in tasks: task1", wf_ex.state_info)

    @mock.patch.object(
        std_actions.EchoAction,
        'run',
        mock.MagicMock(
            side_effect=[
                exc.ActionException(),  # Mock task1 exception for initial run.
                'Task 1.1',             # Mock task1 success for initial run.
                exc.ActionException(),  # Mock task1 exception for initial run.
                'Task 1.0',             # Mock task1 success for rerun.
                'Task 1.2'              # Mock task1 success for rerun.
            ]
        )
    )
    def test_state_info_with_items(self):
        workflow = """---
        version: '2.0'

        wf:
          type: direct

          tasks:
            t1:
              with-items: i in <% list(range(0, 3)) %>
              action: std.echo output="Task 1.<% $.i %>"
        """

        wf_service.create_workflows(workflow)

        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(states.ERROR, wf_ex.state)

            task_1_ex = self._assert_single_item(task_execs, name='t1')

            self.assertEqual(states.ERROR, task_1_ex.state)

            task_1_action_exs = db_api.get_action_executions(
                task_execution_id=task_1_ex.id
            )

            self.assertEqual(3, len(task_1_action_exs))

            error_actions = [
                action_ex for action_ex in task_1_action_exs
                if action_ex.state == states.ERROR
            ]
            self.assertEqual(2, len(error_actions))

            success_actions = [
                action_ex for action_ex in task_1_action_exs
                if action_ex.state == states.SUCCESS
            ]
            self.assertEqual(1, len(success_actions))

            for action_ex in error_actions:
                self.assertIn(action_ex.id, wf_ex.state_info)

            for action_ex in success_actions:
                self.assertNotIn(action_ex.id, wf_ex.state_info)


# === mistral-6.0.0/mistral/tests/unit/engine/test_noop_task.py ===

# Copyright 2014 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_config import cfg

from mistral.db.v2 import api as db_api
from mistral.services import workflows as wf_service
from mistral.tests.unit.engine import base
from mistral.workflow import states

# Use the set_default method to set value otherwise in certain test cases
# the change in value is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan')

WF = """
---
version: '2.0'

wf:
  type: direct

  input:
    - num1
    - num2

  output:
    result: <% $.result %>

  tasks:
    task1:
      action: std.echo output=<% $.num1 %>
      publish:
        result1: <% task(task1).result %>
      on-complete:
        - task3

    task2:
      action: std.echo output=<% $.num2 %>
      publish:
        result2: <% task(task2).result %>
      on-complete:
        - task3

    task3:
      description: |
        This task doesn't have "action" or "workflow" property. It works
        as a "no-op" task and serves just as a decision point in the workflow.
      join: all
      on-complete:
        - task4: <% $.num1 + $.num2 = 2 %>
        - task5: <% $.num1 + $.num2 = 3 %>

    task4:
      action: std.echo output=4
      publish:
        result: <% task(task4).result %>

    task5:
      action: std.echo output=5
      publish:
        result: <% task(task5).result %>
"""


class NoopTaskEngineTest(base.EngineTestCase):
    def test_noop_task1(self):
        wf_service.create_workflows(WF)

        # Start workflow.
        wf_ex = self.engine.start_workflow(
            'wf',
            wf_input={'num1': 1, 'num2': 1}
        )

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            wf_output = wf_ex.output

            tasks = wf_ex.task_executions

            self.assertEqual(4, len(tasks))

            task1 = self._assert_single_item(tasks, name='task1')
            task2 = self._assert_single_item(tasks, name='task2')
            task3 = self._assert_single_item(tasks, name='task3')
            task4 = self._assert_single_item(tasks, name='task4')

            self.assertEqual(states.SUCCESS, task1.state)
            self.assertEqual(states.SUCCESS, task2.state)
            self.assertEqual(states.SUCCESS, task3.state)
            self.assertEqual(states.SUCCESS, task4.state)

        self.assertDictEqual({'result': 4}, wf_output)

    def test_noop_task2(self):
        wf_service.create_workflows(WF)

        # Start workflow.
        wf_ex = self.engine.start_workflow(
            'wf',
            wf_input={'num1': 1, 'num2': 2}
        )

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            wf_output = wf_ex.output

            tasks = wf_ex.task_executions

            self.assertEqual(4, len(tasks))

            task1 = self._assert_single_item(tasks, name='task1')
            task2 = self._assert_single_item(tasks, name='task2')
            task3 = self._assert_single_item(tasks, name='task3')
            task5 = self._assert_single_item(tasks, name='task5')

            self.assertEqual(states.SUCCESS, task1.state)
            self.assertEqual(states.SUCCESS, task2.state)
            self.assertEqual(states.SUCCESS, task3.state)
            self.assertEqual(states.SUCCESS, task5.state)

        self.assertDictEqual({'result': 5}, wf_output)


# === mistral-6.0.0/mistral/tests/unit/engine/test_action_defaults.py ===

# Copyright 2015 - StackStorm, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import mock
from oslo_config import cfg
import requests

from mistral.actions import std_actions
from mistral.db.v2 import api as db_api
from mistral.services import workflows as wf_service
from mistral.tests.unit import base as test_base
from mistral.tests.unit.engine import base
from mistral.workflow import states

# Use the set_default method to set value otherwise in certain test cases
# the change in value is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan')

ENV = {
    '__actions': {
        'std.http': {
            'auth': 'librarian:password123',
            'timeout': 30,
        }
    }
}

EXPECTED_ENV_AUTH = ('librarian', 'password123')

WORKFLOW1 = """
---
version: "2.0"

wf1:
  type: direct

  tasks:
    task1:
      action: std.http url="https://api.library.org/books"
      publish:
        result: <% $ %>
"""

WORKFLOW2 = """
---
version: "2.0"

wf2:
  type: direct

  tasks:
    task1:
      action: std.http url="https://api.library.org/books" timeout=60
      publish:
        result: <% $ %>
"""

WORKFLOW1_WITH_ITEMS = """
---
version: "2.0"

wf1_with_items:
  type: direct

  input:
    - links

  tasks:
    task1:
      with-items: link in <% $.links %>
      action: std.http url=<% $.link %>
      publish:
        result: <% $ %>
"""

WORKFLOW2_WITH_ITEMS = """
---
version: "2.0"

wf2_with_items:
  type: direct

  input:
    - links

  tasks:
    task1:
      with-items: link in <% $.links %>
      action: std.http url=<% $.link %> timeout=60
      publish:
        result: <% $ %>
"""


class ActionDefaultTest(base.EngineTestCase):
    @mock.patch.object(
        requests, 'request',
        mock.MagicMock(return_value=test_base.FakeHTTPResponse('', 200, 'OK')))
    @mock.patch.object(
        std_actions.HTTPAction, 'is_sync',
        mock.MagicMock(return_value=True))
    def test_action_defaults_from_env(self):
        wf_service.create_workflows(WORKFLOW1)

        wf_ex = self.engine.start_workflow('wf1', env=ENV)

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            self.assertEqual(states.SUCCESS, wf_ex.state)

            self._assert_single_item(wf_ex.task_executions, name='task1')

        requests.request.assert_called_with(
            'GET', 'https://api.library.org/books',
            params=None, data=None, headers=None, cookies=None,
            allow_redirects=None, proxies=None, verify=None,
            auth=EXPECTED_ENV_AUTH,
            timeout=ENV['__actions']['std.http']['timeout'])

    @mock.patch.object(
        requests, 'request',
        mock.MagicMock(return_value=test_base.FakeHTTPResponse('', 200, 'OK')))
    @mock.patch.object(
        std_actions.HTTPAction, 'is_sync',
        mock.MagicMock(return_value=True))
    def test_action_defaults_from_env_not_applied(self):
        wf_service.create_workflows(WORKFLOW2)

        wf_ex = self.engine.start_workflow('wf2', env=ENV)

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            self.assertEqual(states.SUCCESS, wf_ex.state)

            self._assert_single_item(wf_ex.task_executions, name='task1')

        requests.request.assert_called_with(
            'GET', 'https://api.library.org/books',
            params=None, data=None, headers=None, cookies=None,
            allow_redirects=None, proxies=None, verify=None,
            auth=EXPECTED_ENV_AUTH,
            timeout=60
        )

    @mock.patch.object(
        requests, 'request',
        mock.MagicMock(return_value=test_base.FakeHTTPResponse('', 200, 'OK')))
    @mock.patch.object(
        std_actions.HTTPAction, 'is_sync',
        mock.MagicMock(return_value=True))
    def test_with_items_action_defaults_from_env(self):
        wf_service.create_workflows(WORKFLOW1_WITH_ITEMS)

        wf_input = {
            'links': [
                'https://api.library.org/books',
                'https://api.library.org/authors'
            ]
        }

        wf_ex = self.engine.start_workflow(
            'wf1_with_items',
            wf_input=wf_input,
            env=ENV
        )

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            self.assertEqual(states.SUCCESS, wf_ex.state)

            self._assert_single_item(wf_ex.task_executions, name='task1')

        calls = [mock.call('GET', url, params=None, data=None,
                           headers=None, cookies=None,
                           allow_redirects=None, proxies=None,
                           auth=EXPECTED_ENV_AUTH, verify=None,
                           timeout=ENV['__actions']['std.http']['timeout'])
                 for url in wf_input['links']]

        requests.request.assert_has_calls(calls, any_order=True)

    @mock.patch.object(
        requests, 'request',
        mock.MagicMock(return_value=test_base.FakeHTTPResponse('', 200, 'OK')))
    @mock.patch.object(
        std_actions.HTTPAction, 'is_sync',
        mock.MagicMock(return_value=True))
    def test_with_items_action_defaults_from_env_not_applied(self):
        wf_service.create_workflows(WORKFLOW2_WITH_ITEMS)

        wf_input = {
            'links': [
                'https://api.library.org/books',
                'https://api.library.org/authors'
            ]
        }

        wf_ex = self.engine.start_workflow(
            'wf2_with_items',
            wf_input=wf_input,
            env=ENV
        )

        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            self.assertEqual(states.SUCCESS, wf_ex.state)

            self._assert_single_item(wf_ex.task_executions, name='task1')

        calls = [mock.call('GET', url, params=None, data=None,
                           headers=None, cookies=None,
                           allow_redirects=None, proxies=None,
                           auth=EXPECTED_ENV_AUTH, verify=None,
                           timeout=60)
                 for url in wf_input['links']]

        requests.request.assert_has_calls(calls, any_order=True)


# === mistral-6.0.0/mistral/tests/unit/engine/test_task_publish.py ===

# Copyright 2015 - StackStorm, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import mock
from oslo_config import cfg

from mistral.actions import std_actions
from mistral.db.v2 import api as db_api
from mistral.services import workbooks as wb_service
from mistral.tests.unit.engine import base
from mistral.workflow import states

# Use the set_default method to set value otherwise in certain test cases
# the change in value is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan')

SIMPLE_WORKBOOK = """
---
version: '2.0'

name: wb1

workflows:
  wf1:
    type: direct
    tasks:
      t1:
        action: std.echo output="Task 1"
        publish:
          v1: <% $.t1.get($foobar) %>
        on-success:
          - t2
      t2:
        action: std.echo output="Task 2"
        on-success:
          - t3
      t3:
        action: std.echo output="Task 3"
"""


class TaskPublishTest(base.EngineTestCase):
    @mock.patch.object(
        std_actions.EchoAction,
        'run',
        mock.MagicMock(
            side_effect=[
                'Task 1',  # Mock task1 success.
                'Task 2',  # Mock task2 success.
                'Task 3'   # Mock task3 success.
            ]
        )
    )
    def test_publish_failure(self):
        wb_service.create_workbook_v2(SIMPLE_WORKBOOK)

        # Run workflow and fail task.
        wf_ex = self.engine.start_workflow('wb1.wf1')

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(states.ERROR, wf_ex.state)
            self.assertEqual(1, len(task_execs))

            task_1_ex = self._assert_single_item(task_execs, name='t1')

            # Task 1 should have failed.
            self.assertEqual(states.ERROR, task_1_ex.state)
            self.assertIn(
                'Can not evaluate YAQL expression',
                task_1_ex.state_info
            )

            # Action execution of task 1 should have succeeded.
            task_1_action_exs = db_api.get_action_executions(
                task_execution_id=task_1_ex.id
            )

            self.assertEqual(1, len(task_1_action_exs))
            self.assertEqual(states.SUCCESS, task_1_action_exs[0].state)


# === mistral-6.0.0/mistral/tests/unit/engine/test_action_context.py ===

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import mock

from mistral.db.v2 import api as db_api
from mistral.services import workflows as wf_service
from mistral.tests.unit import base as test_base
from mistral.tests.unit.engine import base
from mistral_lib import actions as actions_base

WF = """
---
version: '2.0'

wf:
  tasks:
    task1:
      action: my_action
"""


class MyAction(actions_base.Action):
    def run(self, context):
        pass


class ActionContextTest(base.EngineTestCase):
    def setUp(self):
        super(ActionContextTest, self).setUp()

        test_base.register_action_class('my_action', MyAction)

    @mock.patch.object(MyAction, 'run', return_value=None)
    def test_context(self, mocked_run):
        wf_service.create_workflows(WF)

        # Start workflow.
        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_success(wf_ex.id)

        self.assertEqual(1, len(mocked_run.call_args_list))

        action_context = mocked_run.call_args[0][0]
        exec_context = action_context.execution

        with db_api.transaction():
            # Note: We need to reread execution to access related tasks.
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            self.assertEqual(exec_context.workflow_execution_id, wf_ex.id)

            tasks = wf_ex.task_executions

            task1 = self._assert_single_item(tasks, name='task1')

            a_ex = task1.action_executions[0]

            self.assertEqual(exec_context.task_id, task1.id)
            self.assertEqual(exec_context.workflow_name, wf_ex.name)

            callback_url = "/v2/action_executions/{}".format(a_ex.id)

            self.assertEqual(exec_context.callback_url, callback_url)
            self.assertEqual(exec_context.action_execution_id, a_ex.id)


# === mistral-6.0.0/mistral/tests/unit/engine/test_error_handling.py ===

# Copyright 2016 - Nokia Networks.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import mock
from oslo_config import cfg
from oslo_db import exception as db_exc

from mistral.db.v2 import api as db_api
from mistral import exceptions as exc
from mistral.services import workbooks as wb_service
from mistral.services import workflows as wf_service
from mistral.tests.unit.engine import base
from mistral.utils import expression_utils
from mistral.workflow import states
from mistral_lib import actions as actions_base

# Use the set_default method to set value otherwise in certain test cases
# the change in value is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan')


class InvalidUnicodeAction(actions_base.Action):
    def __init__(self):
        pass

    def run(self, context):
        return b'\xf8'

    def test(self):
        pass


class ErrorHandlingEngineTest(base.EngineTestCase):
    def test_invalid_workflow_input(self):
        # Check that in case of invalid input workflow objects aren't even
        # created.
        wf_text = """
        version: '2.0'

        wf:
          input:
            - param1
            - param2

          tasks:
            task1:
              action: std.noop
        """

        wf_service.create_workflows(wf_text)

        self.assertRaises(
            exc.InputException,
            self.engine.start_workflow,
            'wf',
            '',
            {'wrong_param': 'some_value'}
        )

        self.assertEqual(0, len(db_api.get_workflow_executions()))
        self.assertEqual(0, len(db_api.get_task_executions()))
        self.assertEqual(0, len(db_api.get_action_executions()))

    def test_first_task_error(self):
        # Check that in case of an error in first task workflow objects are
        # still persisted properly.
        wf_text = """
        version: '2.0'

        wf:
          tasks:
            task1:
              action: std.fail
              on-success: task2

            task2:
              action: std.noop
        """

        wf_service.create_workflows(wf_text)

        wf_ex = self.engine.start_workflow('wf')

        self.assertEqual(states.RUNNING, wf_ex.state)
        self.assertIsNotNone(db_api.get_workflow_execution(wf_ex.id))

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            self._assert_single_item(
                task_execs,
                name='task1',
                state=states.ERROR
            )

    def test_action_error(self):
        # Check that state of all workflow objects (workflow executions,
        # task executions, action executions) is properly persisted in case
        # of action error.
        wf_text = """
        version: '2.0'

        wf:
          tasks:
            task1:
              action: std.fail
        """

        wf_service.create_workflows(wf_text)

        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            self._assert_single_item(
                task_execs,
                name='task1',
                state=states.ERROR
            )

    def test_task_error(self):
        # Check that state of all workflow objects (workflow executions,
        # task executions, action executions) is properly persisted in case
        # of an error at task level.
        wf_text = """
        version: '2.0'

        wf:
          tasks:
            task1:
              action: std.noop
              publish:
                my_var: <% invalid_yaql_function() %>
        """

        wf_service.create_workflows(wf_text)

        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            # Now we need to make sure that task is in ERROR state but action
            # is in SUCCESS because error occurred in 'publish' clause which
            # must not affect action state.
            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            task_ex = self._assert_single_item(
                task_execs,
                name='task1',
                state=states.ERROR
            )

            action_execs = task_ex.executions

            self.assertEqual(1, len(action_execs))

            self._assert_single_item(
                action_execs,
                name='std.noop',
                state=states.SUCCESS
            )

    def test_task_error_with_on_handlers(self):
        # Check that state of all workflow objects (workflow executions,
        # task executions, action executions) is properly persisted in case
        # of an error at task level and this task has on-XXX handlers.
        wf_text = """
        version: '2.0'

        wf:
          tasks:
            task1:
              action: std.noop
              publish:
                my_var: <% invalid_yaql_function() %>
              on-success:
                - task2
              on-error:
                - task3

            task2:
              description: This task must never run.
              action: std.noop

            task3:
              action: std.noop
        """

        wf_service.create_workflows(wf_text)

        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            # Now we need to make sure that task is in ERROR state but action
            # is in SUCCESS because error occurred in 'publish' clause which
            # must not affect action state.
            task_execs = wf_ex.task_executions

            # NOTE: task3 must not run because on-error handler triggers
            # only on error outcome of an action (or workflow) associated
            # with a task.
            self.assertEqual(1, len(task_execs))

            task_ex = self._assert_single_item(
                task_execs,
                name='task1',
                state=states.ERROR
            )

            action_execs = task_ex.executions

            self.assertEqual(1, len(action_execs))

            self._assert_single_item(
                action_execs,
                name='std.noop',
                state=states.SUCCESS
            )

    def test_workflow_error(self):
        # Check that state of all workflow objects (workflow executions,
        # task executions, action executions) is properly persisted in case
        # of an error at workflow level.
        wf_text = """
        version: '2.0'

        wf:
          output:
            my_output: <% $.invalid_yaql_variable %>

          tasks:
            task1:
              action: std.noop
        """

        wf_service.create_workflows(wf_text)

        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            # Now we need to make sure that task and action are in SUCCESS
            # state because mistake at workflow level (output evaluation)
            # must not affect them.
            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            task_ex = self._assert_single_item(
                task_execs,
                name='task1',
                state=states.SUCCESS
            )

            action_execs = task_ex.executions

            self.assertEqual(1, len(action_execs))

            self._assert_single_item(
                action_execs,
                name='std.noop',
                state=states.SUCCESS
            )

    def test_action_error_with_wait_before_policy(self):
        # Check that state of all workflow objects (workflow executions,
        # task executions, action executions) is properly persisted in case
        # of action error and task has 'wait-before' policy. It is an
        # implicit test for task continuation because 'wait-before' inserts
        # a delay between preparing task execution object and scheduling
        # actions. If an error happens during scheduling actions (e.g.
        # invalid YAQL in action parameters) then we also need to handle
        # this properly, meaning that task and workflow state should go
        # into ERROR state.
        wf_text = """
        version: '2.0'

        wf:
          tasks:
            task1:
              action: std.echo output=<% invalid_yaql_function() %>
              wait-before: 1
        """

        wf_service.create_workflows(wf_text)

        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            task_ex = self._assert_single_item(
                task_execs,
                name='task1',
                state=states.ERROR
            )

            action_execs = task_ex.executions

            self.assertEqual(0, len(action_execs))

    def test_action_error_with_wait_after_policy(self):
        # Check that state of all workflow objects (workflow executions,
        # task executions, action executions) is properly persisted in case
        # of action error and task has 'wait-after' policy. It is an
        # implicit test for task completion because 'wait-after' inserts
        # a delay between actual task completion and logic that calculates
        # next workflow commands. If an error happens while calculating
        # next commands (e.g. invalid YAQL in on-XXX clauses) then we also
        # need to handle this properly, meaning that task and workflow state
        # should go into ERROR state.
        wf_text = """
        version: '2.0'

        wf:
          tasks:
            task1:
              action: std.noop
              wait-after: 1
              on-success:
                - task2: <% invalid_yaql_function() %>

            task2:
              action: std.noop
        """

        wf_service.create_workflows(wf_text)

        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            self.assertEqual(1, len(task_execs))

            task_ex = self._assert_single_item(
                task_execs,
                name='task1',
                state=states.ERROR
            )

            action_execs = task_ex.executions

            self.assertEqual(1, len(action_execs))

            self._assert_single_item(
                action_execs,
                name='std.noop',
                state=states.SUCCESS
            )

    def test_error_message_format_key_error(self):
        wf_text = """
        version: '2.0'

        wf:
          tasks:
            task1:
              action: std.noop
              on-success:
                - succeed: <% $.invalid_yaql %>
        """

        wf_service.create_workflows(wf_text)

        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_ex = wf_ex.task_executions[0]

            state_info = task_ex.state_info

        self.assertIsNotNone(state_info)
        self.assertLess(state_info.find('error'), state_info.find('data'))

    def test_error_message_format_unknown_function(self):
        wf_text = """
        version: '2.0'

        wf:
          tasks:
            task1:
              action: std.noop
              publish:
                my_var: <% invalid_yaql_function() %>
        """

        wf_service.create_workflows(wf_text)

        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_ex = wf_ex.task_executions[0]

            state_info = task_ex.state_info

        self.assertIsNotNone(state_info)
        self.assertGreater(state_info.find('error='), 0)
        self.assertLess(state_info.find('error='), state_info.find('data='))

    def test_error_message_format_invalid_on_task_run(self):
        wf_text = """
        version: '2.0'

        wf:
          tasks:
            task1:
action: std.echo output={{ _.invalid_var }} """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] state_info = task_ex.state_info self.assertIsNotNone(state_info) self.assertGreater(state_info.find('error='), 0) self.assertLess(state_info.find('error='), state_info.find('wf=')) def test_error_message_format_on_task_continue(self): wf_text = """ version: '2.0' wf: tasks: task1: action: std.echo output={{ _.invalid_var }} wait-before: 1 """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] state_info = task_ex.state_info self.assertIsNotNone(state_info) self.assertGreater(state_info.find('error='), 0) self.assertLess(state_info.find('error='), state_info.find('wf=')) def test_error_message_format_on_action_complete(self): wf_text = """ version: '2.0' wf: tasks: task1: action: std.noop publish: my_var: <% invalid_yaql_function() %> """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] state_info = task_ex.state_info print(state_info) self.assertIsNotNone(state_info) self.assertGreater(state_info.find('error='), 0) self.assertLess(state_info.find('error='), state_info.find('wf=')) def test_error_message_format_complete_task(self): wf_text = """ version: '2.0' wf: tasks: task1: action: std.noop wait-after: 1 on-success: - task2: <% invalid_yaql_function() %> task2: action: std.noop """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = 
db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] state_info = task_ex.state_info self.assertIsNotNone(state_info) self.assertGreater(state_info.find('error='), 0) self.assertLess(state_info.find('error='), state_info.find('wf=')) def test_error_message_format_on_adhoc_action_error(self): wb_text = """ version: '2.0' name: wb actions: my_action: input: - output output: <% invalid_yaql_function() %> base: std.echo base-input: output: <% $.output %> workflows: wf: tasks: task1: action: my_action output="test" """ wb_service.create_workbook_v2(wb_text) wf_ex = self.engine.start_workflow('wb.wf') self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] state_info = task_ex.state_info self.assertIsNotNone(state_info) self.assertGreater(state_info.find('error='), 0) self.assertLess(state_info.find('error='), state_info.find('action=')) def test_publish_bad_yaql(self): wf_text = """--- version: '2.0' wf: type: direct input: - my_dict: - id: 1 value: 11 tasks: task1: action: std.noop publish: problem_var: <% $.my_dict.where($.value = 13).id.first() %> """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] action_ex = task_ex.action_executions[0] self.assertEqual(states.SUCCESS, action_ex.state) self.assertEqual(states.ERROR, task_ex.state) self.assertIsNotNone(task_ex.state_info) self.assertEqual(states.ERROR, wf_ex.state) def test_publish_bad_jinja(self): wf_text = """--- version: '2.0' wf: type: direct input: - my_dict: - id: 1 value: 11 tasks: task1: action: std.noop publish: problem_var: '{{ (_.my_dict|some_invalid_filter).id }}' """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = 
db_api.get_workflow_execution(wf_ex.id) task_ex = wf_ex.task_executions[0] action_ex = task_ex.action_executions[0] self.assertEqual(states.SUCCESS, action_ex.state) self.assertEqual(states.ERROR, task_ex.state) self.assertIsNotNone(task_ex.state_info) self.assertEqual(states.ERROR, wf_ex.state) def test_invalid_task_input(self): wf_text = """--- version: '2.0' wf: tasks: task1: action: std.noop on-success: task2 task2: action: std.echo output=<% $.non_existing_function_AAA() %> """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions self.assertEqual(2, len(tasks)) self._assert_single_item(tasks, name='task1', state=states.SUCCESS) t2 = self._assert_single_item(tasks, name='task2', state=states.ERROR) self.assertIsNotNone(t2.state_info) self.assertIn('Can not evaluate YAQL expression', t2.state_info) self.assertIsNotNone(wf_ex.state_info) self.assertIn('Can not evaluate YAQL expression', wf_ex.state_info) def test_invalid_action_result(self): self.register_action_class( 'test.invalid_unicode_action', InvalidUnicodeAction ) wf_text = """--- version: '2.0' wf: tasks: task1: action: test.invalid_unicode_action on-success: task2 task2: action: std.noop """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertEqual(1, len(wf_ex.task_executions)) task_ex = wf_ex.task_executions[0] self.assertIn("UnicodeDecodeError: utf", wf_ex.state_info) self.assertIn("UnicodeDecodeError: utf", task_ex.state_info) @mock.patch( 'mistral.utils.expression_utils.get_yaql_context', mock.MagicMock( side_effect=[ db_exc.DBDeadlock(), # Emulating DB deadlock expression_utils.get_yaql_context({}) # Successful run ] ) ) def test_db_error_in_yaql_expression(self): # This test just checks 
that the workflow completes successfully # even if a DB deadlock occurs during YAQL expression evaluation. # The engine in this case should just retry the transactional # method. wf_text = """--- version: '2.0' wf: tasks: task1: action: std.echo output="Hello" publish: my_var: <% 1 + 1 %> """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertEqual(1, len(wf_ex.task_executions)) task_ex = wf_ex.task_executions[0] self.assertDictEqual({'my_var': 2}, task_ex.published) @mock.patch( 'mistral.utils.expression_utils.get_jinja_context', mock.MagicMock( side_effect=[ db_exc.DBDeadlock(), # Emulating DB deadlock expression_utils.get_jinja_context({}) # Successful run ] ) ) def test_db_error_in_jinja_expression(self): # This test just checks that the workflow completes successfully # even if a DB deadlock occurs during Jinja expression evaluation. # The engine in this case should just retry the transactional # method. wf_text = """--- version: '2.0' wf: tasks: task1: action: std.echo output="Hello" publish: my_var: "{{ 1 + 1 }}" """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertEqual(1, len(wf_ex.task_executions)) task_ex = wf_ex.task_executions[0] self.assertDictEqual({'my_var': 2}, task_ex.published) mistral-6.0.0/mistral/tests/unit/engine/test_join.py # Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg import testtools from mistral.db.v2 import api as db_api from mistral.services import workflows as wf_service from mistral.tests.unit.engine import base from mistral.workflow import states # Use the set_default method to set value otherwise in certain test cases # the change in value is not permanent. cfg.CONF.set_default('auth_enable', False, group='pecan') class JoinEngineTest(base.EngineTestCase): def test_full_join_simple(self): wf_text = """--- version: '2.0' wf: type: direct tasks: join_task: join: all task1: on-success: join_task task2: on-success: join_task """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) t_execs = wf_ex.task_executions self._assert_single_item(t_execs, name='task1') self._assert_single_item(t_execs, name='task2') self._assert_single_item(t_execs, name='join_task') def test_full_join_without_errors(self): wf_text = """--- version: '2.0' wf: type: direct output: result: <% $.result3 %> tasks: task1: action: std.echo output=1 publish: result1: <% task(task1).result %> on-complete: - task3 task2: action: std.echo output=2 publish: result2: <% task(task2).result %> on-complete: - task3 task3: join: all action: std.echo output="<% $.result1 %>,<% $.result2 %>" publish: result3: <% task(task3).result %> """ wf_service.create_workflows(wf_text) # Start workflow. 
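The join tests that follow exercise three cardinality forms of the `join` property: `all` (wait for every inbound transition), an integer N (wait for any N of them), and `one` (a discriminator that fires on the first). A hedged sketch of that readiness rule, with hypothetical names rather than Mistral internals:

```python
# Illustrative join-readiness check, assuming the cardinality semantics
# described in the tests below (names are hypothetical, not engine code).

def join_ready(join_spec, completed_inbound, total_inbound):
    """Return True when a 'join' task may leave the WAITING state."""
    if join_spec == 'all':
        # 'join: all' waits for every inbound transition.
        return completed_inbound == total_inbound
    if join_spec == 'one':
        # 'join: one' is a discriminator: the first completion suffices.
        return completed_inbound >= 1
    # 'join: N' waits for any N inbound completions.
    return completed_inbound >= int(join_spec)
```

For example, with two predecessors `join_ready('all', 1, 2)` is False until both finish, while `join_ready(2, 2, 3)` already holds even if the third task never completes (the `test_partial_join` case).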
wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) # Note: We need to reread execution to access related tasks. with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertDictEqual({'result': '1,2'}, wf_ex.output) tasks = wf_ex.task_executions task1 = self._assert_single_item(tasks, name='task1') task2 = self._assert_single_item(tasks, name='task2') task3 = self._assert_single_item(tasks, name='task3') self.assertEqual(states.SUCCESS, task1.state) self.assertEqual(states.SUCCESS, task2.state) self.assertEqual(states.SUCCESS, task3.state) def test_full_join_with_errors(self): wf_text = """--- version: '2.0' wf: type: direct output: result: <% $.result3 %> tasks: task1: action: std.echo output=1 publish: result1: <% task(task1).result %> on-complete: - task3 task2: action: std.fail on-error: - task3 task3: join: all action: std.echo output="<% $.result1 %>-<% $.result1 %>" publish: result3: <% task(task3).result %> """ wf_service.create_workflows(wf_text) # Start workflow. wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) # Note: We need to reread execution to access related tasks. 
with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertDictEqual({'result': '1-1'}, wf_ex.output) tasks = wf_ex.task_executions task1 = self._assert_single_item(tasks, name='task1') task2 = self._assert_single_item(tasks, name='task2') task3 = self._assert_single_item(tasks, name='task3') self.assertEqual(states.SUCCESS, task1.state) self.assertEqual(states.ERROR, task2.state) self.assertEqual(states.SUCCESS, task3.state) def test_full_join_with_conditions(self): wf_text = """--- version: '2.0' wf: type: direct output: result: <% $.result4 %> tasks: task1: action: std.echo output=1 publish: result1: <% task(task1).result %> on-complete: - task3 task2: action: std.echo output=2 publish: result2: <% task(task2).result %> on-complete: - task3: <% $.result2 = 11111 %> - task4: <% $.result2 = 2 %> task3: join: all action: std.echo output="<% $.result1 %>-<% $.result1 %>" publish: result3: <% task(task3).result %> task4: action: std.echo output=4 publish: result4: <% task(task4).result %> """ wf_service.create_workflows(wf_text) # Start workflow. wf_ex = self.engine.start_workflow('wf') def _num_of_tasks(): return len( db_api.get_task_executions(workflow_execution_id=wf_ex.id) ) self._await(lambda: _num_of_tasks() == 4) with db_api.transaction(): # Note: We need to reread execution to access related tasks. wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions task1 = self._assert_single_item(tasks, name='task1') task2 = self._assert_single_item(tasks, name='task2') task3 = self._assert_single_item(tasks, name='task3') task4 = self._assert_single_item(tasks, name='task4') # NOTE(xylan): We ensure task4 is successful here because of the # uncertainty of its running in parallel with task3. 
self.await_task_success(task4.id) self.assertEqual(states.RUNNING, wf_ex.state) self.assertEqual(states.SUCCESS, task1.state) self.assertEqual(states.SUCCESS, task2.state) # NOTE(rakhmerov): Task 3 must fail because task2->task3 transition # will never trigger due to its condition. self.await_task_error(task3.id) self.await_workflow_error(wf_ex.id) def test_partial_join(self): wf_text = """--- version: '2.0' wf: type: direct output: result: <% $.result4 %> tasks: task1: action: std.echo output=1 publish: result1: <% task(task1).result %> on-complete: - task4 task2: action: std.echo output=2 publish: result2: <% task(task2).result %> on-complete: - task4 task3: action: std.fail description: | Always fails and 'on-success' never gets triggered. However, 'task4' will run since its join cardinality is 2 which means 'task1' and 'task2' completion is enough to trigger it. on-success: - task4 on-error: - noop task4: join: 2 action: std.echo output="<% $.result1 %>,<% $.result2 %>" publish: result4: <% task(task4).result %> """ wf_service.create_workflows(wf_text) # Start workflow. wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) # Note: We need to reread execution to access related tasks. with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertDictEqual({'result': '1,2'}, wf_ex.output) tasks = wf_ex.task_executions self.assertEqual(4, len(tasks)) task4 = self._assert_single_item(tasks, name='task4') task1 = self._assert_single_item(tasks, name='task1') task2 = self._assert_single_item(tasks, name='task2') task3 = self._assert_single_item(tasks, name='task3') self.assertEqual(states.SUCCESS, task1.state) self.assertEqual(states.SUCCESS, task2.state) self.assertEqual(states.SUCCESS, task4.state) # task3 may still be in RUNNING state and we need to make sure # it gets into ERROR state. 
self.await_task_error(task3.id) self.assertDictEqual({'result4': '1,2'}, task4.published) def test_partial_join_triggers_once(self): wf_text = """--- version: '2.0' wf: type: direct output: result: <% $.result5 %> tasks: task1: action: std.noop publish: result1: 1 on-complete: - task5 task2: action: std.noop publish: result2: 2 on-complete: - task5 task3: action: std.noop publish: result3: 3 on-complete: - task5 task4: action: std.noop publish: result4: 4 on-complete: - task5 task5: join: 2 action: std.echo input: output: | <% result1 in $.keys() %>,<% result2 in $.keys() %>, <% result3 in $.keys() %>,<% result4 in $.keys() %> publish: result5: <% task(task5).result %> """ wf_service.create_workflows(wf_text) # Start workflow. wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) # Note: We need to reread execution to access related tasks. with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions self.assertEqual(5, len(tasks)) task5 = self._assert_single_item(tasks, name='task5') self.assertEqual(states.SUCCESS, task5.state) success_count = sum([1 for t in tasks if t.state == states.SUCCESS]) # At least task4 and two others must be successfully completed. self.assertGreaterEqual(success_count, 3) result5 = task5.published['result5'] self.assertIsNotNone(result5) # Depending on how many inbound tasks completed before 'join' # task5 started it can get different inbound context with. # But at least two inbound results should be accessible at task5 # which logically corresponds to 'join' cardinality 2. 
self.assertGreaterEqual(result5.count('True'), 2) def test_discriminator(self): wf_text = """--- version: '2.0' wf: type: direct output: result: <% $.result4 %> tasks: task1: action: std.noop publish: result1: 1 on-complete: - task4 task2: action: std.noop publish: result2: 2 on-complete: - task4 task3: action: std.noop publish: result3: 3 on-complete: - task4 task4: join: one action: std.echo input: output: | <% result1 in $.keys() %>,<% result2 in $.keys() %>, <% result3 in $.keys() %> publish: result4: <% task(task4).result %> """ wf_service.create_workflows(wf_text) # Start workflow. wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) # Note: We need to reread execution to access related tasks. with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions self.assertEqual(4, len(tasks)) task4 = self._assert_single_item(tasks, name='task4') self.assertEqual(states.SUCCESS, task4.state) success_count = sum([1 for t in tasks if t.state == states.SUCCESS]) # At least task4 and one of others must be successfully completed. self.assertGreaterEqual(success_count, 2) result4 = task4.published['result4'] self.assertIsNotNone(result4) self.assertLess(result4.count('False'), 3) self.assertGreaterEqual(result4.count('True'), 1) def test_full_join_parallel_published_vars(self): wfs_tasks_join_complex = """--- version: '2.0' main: type: direct output: var1: <% $.var1 %> var2: <% $.var2 %> is_done: <% $.is_done %> tasks: init: publish: var1: false var2: false is_done: false on-success: - branch1 - branch2 branch1: workflow: work publish: var1: true on-success: - done branch2: publish: var2: true on-success: - done done: join: all publish: is_done: true work: type: direct tasks: do: action: std.echo output="Doing..." on-success: - exit exit: action: std.echo output="Exiting..." """ wf_service.create_workflows(wfs_tasks_join_complex) # Start workflow. 
wf_ex = self.engine.start_workflow('main') self.await_workflow_success(wf_ex.id) # Note: We need to reread execution to access related tasks. with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertDictEqual( { 'var1': True, 'is_done': True, 'var2': True }, wf_ex.output ) @testtools.skip('https://bugs.launchpad.net/mistral/+bug/1424461') def test_full_join_parallel_published_vars_complex(self): wf_text = """--- version: "2.0" main: type: direct output: var_a: <% $.var_a %> var_b: <% $.var_b %> var_c: <% $.var_c %> var_d: <% $.var_d %> tasks: init: publish: var_a: 0 var_b: 0 var_c: 0 on-success: - branch1_0 - branch2_0 branch1_0: publish: var_c: 1 on-success: - branch1_1 branch2_0: publish: var_a: 1 on-success: - done branch1_1: publish: var_b: 1 on-success: - done done: join: all publish: var_d: 1 """ wf_service.create_workflows(wf_text) # Start workflow. wf_ex = self.engine.start_workflow('main') self.await_workflow_success(wf_ex.id) # Note: We need to reread execution to access related tasks. 
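The `main` workflow above initializes `var1`, `var2` and `is_done` to false, lets two parallel branches and the final join each publish one of them, and expects all three true in the output. Under the assumed semantics that later `publish` clauses are merged over the initial context, the expected output reduces to a dict-merge:

```python
# Sketch of the assumed publish-merge semantics checked by
# test_full_join_parallel_published_vars: each task publishes into the
# workflow context and the 'all' join sees the union of all branches.

init = {'var1': False, 'var2': False, 'is_done': False}
branch1 = {'var1': True}    # published by branch1
branch2 = {'var2': True}    # published by branch2
done = {'is_done': True}    # published by the join task

merged = {}
for published in (init, branch1, branch2, done):
    merged.update(published)
```

The merged dict matches the `assertDictEqual` on `wf_ex.output` in the test.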
with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertDictEqual( { 'var_a': 1, 'var_b': 1, 'var_c': 1, 'var_d': 1 }, wf_ex.output ) def test_full_join_with_branch_errors(self): wf_text = """--- version: '2.0' main: type: direct tasks: task10: action: std.noop on-success: - task21 - task31 task21: action: std.noop on-success: - task22 task22: action: std.noop on-success: - task40 task31: action: std.fail on-success: - task32 task32: action: std.noop on-success: - task40 task40: join: all action: std.noop """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('main') self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions self.assertIsNotNone(wf_ex.state_info) task10 = self._assert_single_item(tasks, name='task10') task21 = self._assert_single_item(tasks, name='task21') task22 = self._assert_single_item(tasks, name='task22') task31 = self._assert_single_item(tasks, name='task31') task40 = self._assert_single_item(tasks, name='task40') self.assertEqual(states.SUCCESS, task10.state) self.assertEqual(states.SUCCESS, task21.state) self.assertEqual(states.SUCCESS, task22.state) self.assertEqual(states.ERROR, task31.state) self.assertNotIn('task32', [task.name for task in tasks]) self.assertEqual(states.ERROR, task40.state) def test_diamond_join_all(self): wf_text = """--- version: '2.0' test-join: tasks: a: on-success: - b - c - d b: on-success: - e c: on-success: - e d: on-success: - e e: join: all """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('test-join') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions self._assert_multiple_items(tasks, 5, state=states.SUCCESS) def test_join_multiple_routes_with_one_source(self): wf_text = """--- version: '2.0' wf: tasks: a: on-success: - b - c b: on-success: - c c: join: all 
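The recurring "we need to reread execution to access related tasks" comment reflects typical ORM behavior: lazily-loaded relationships such as `task_executions` can only be resolved while the object is still attached to an active session, so the tests re-fetch inside `with db_api.transaction():`. A toy illustration of that constraint (assumed ORM behavior, not Mistral or SQLAlchemy code):

```python
# Toy model of a lazy-loaded relationship that only works while the
# owning session is open -- the reason these tests reread wf_ex inside
# a transaction before touching wf_ex.task_executions.

class Session:
    def __init__(self):
        self.active = True

class WorkflowExecution:
    def __init__(self, session, tasks):
        self._session = session
        self._tasks = tasks

    @property
    def task_executions(self):
        if not self._session.active:
            raise RuntimeError('detached instance: session is closed')
        return self._tasks

session = Session()
wf_ex = WorkflowExecution(session, ['task1'])
inside = wf_ex.task_executions       # fine while the session is open
session.active = False               # i.e. leaving the 'with' block
try:
    wf_ex.task_executions
    detached_error = None
except RuntimeError as e:
    detached_error = str(e)
```

Accessing the relationship after the session closes fails, which is why a fresh `get_workflow_execution` call inside the transaction is required.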
""" wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions self._assert_multiple_items(tasks, 3, state=states.SUCCESS) def test_join_after_join(self): wf_text = """--- version: '2.0' wf: tasks: a: on-success: - c b: on-success: - c c: join: all on-success: - f d: on-success: - f e: on-success: - f f: join: all """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions self.assertEqual(6, len(task_execs)) self._assert_multiple_items(task_execs, 6, state=states.SUCCESS) def test_join_route_delays(self): wf_text = """--- version: '2.0' wf: tasks: a: wait-before: 4 on-success: b b: on-success: join c: on-success: join join: join: all """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions self.assertEqual(4, len(task_execs)) self._assert_multiple_items(task_execs, 4, state=states.SUCCESS) def test_delete_join_completion_check_on_stop(self): wf_text = """--- version: '2.0' wf: tasks: task1: action: std.noop on-success: join_task task2: description: Never ends action: std.async_noop on-success: join_task join_task: join: all """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') tasks = db_api.get_task_executions(workflow_execution_id=wf_ex.id) self.assertGreaterEqual(len(tasks), 2) task1 = self._assert_single_item(tasks, name='task1') self.await_task_success(task1.id) # Once task1 is finished we know that join_task must be created. 
tasks = db_api.get_task_executions(workflow_execution_id=wf_ex.id) self._assert_single_item( tasks, name='join_task', state=states.WAITING ) calls = db_api.get_delayed_calls() mtd_name = 'mistral.engine.task_handler._refresh_task_state' cnt = sum([1 for c in calls if c.target_method_name == mtd_name]) # There can be 2 calls with different value of 'processing' flag. self.assertTrue(cnt == 1 or cnt == 2) # Stop the workflow. self.engine.stop_workflow(wf_ex.id, state=states.CANCELLED) self._await( lambda: len(db_api.get_delayed_calls(target_method_name=mtd_name)) == 0 ) def test_delete_join_completion_check_on_execution_delete(self): wf_text = """--- version: '2.0' wf: tasks: task1: action: std.noop on-success: join_task task2: description: Never ends action: std.async_noop on-success: join_task join_task: join: all """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') tasks = db_api.get_task_executions(workflow_execution_id=wf_ex.id) self.assertGreaterEqual(len(tasks), 2) task1 = self._assert_single_item(tasks, name='task1') self.await_task_success(task1.id) # Once task1 is finished we know that join_task must be created. tasks = db_api.get_task_executions(workflow_execution_id=wf_ex.id) self._assert_single_item( tasks, name='join_task', state=states.WAITING ) calls = db_api.get_delayed_calls() mtd_name = 'mistral.engine.task_handler._refresh_task_state' cnt = sum([1 for c in calls if c.target_method_name == mtd_name]) # There can be 2 calls with different value of 'processing' flag. self.assertTrue(cnt == 1 or cnt == 2) # Stop the workflow. 
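These two tests assert on delayed-call bookkeeping: a WAITING join schedules periodic `_refresh_task_state` calls, and stopping or deleting the workflow execution must remove them. A hypothetical in-memory model of that registry (the function names below are stand-ins, not the real `db_api`):

```python
# Hypothetical registry mirroring what the tests check via
# db_api.get_delayed_calls(): pending refresh calls for a WAITING join
# must disappear once the owning workflow execution is stopped/deleted.

delayed_calls = []

def schedule(target_method_name, wf_ex_id):
    delayed_calls.append({'target_method_name': target_method_name,
                          'wf_ex_id': wf_ex_id})

def get_delayed_calls(target_method_name=None):
    return [c for c in delayed_calls
            if target_method_name is None
            or c['target_method_name'] == target_method_name]

def stop_workflow(wf_ex_id):
    # Cancelling/deleting the execution purges its pending calls.
    delayed_calls[:] = [c for c in delayed_calls
                        if c['wf_ex_id'] != wf_ex_id]

mtd = 'mistral.engine.task_handler._refresh_task_state'
schedule(mtd, 'wf-1')
scheduled = len(get_delayed_calls(mtd))   # 1 while the join is WAITING
stop_workflow('wf-1')
```

After `stop_workflow`, the registry is empty for that method name, which is exactly the condition the tests wait for with `self._await(...)`.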
db_api.delete_workflow_execution(wf_ex.id) self._await( lambda: len(db_api.get_delayed_calls(target_method_name=mtd_name)) == 0 ) def test_join_with_deep_dependencies_tree(self): wf_text = """--- version: '2.0' wf: tasks: task_a_1: on-success: - task_with_join task_b_1: action: std.fail on-success: - task_b_2 task_b_2: on-success: - task_b_3 task_b_3: on-success: - task_with_join task_with_join: join: all """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions self.assertEqual(3, len(task_execs)) self._assert_single_item( task_execs, name='task_a_1', state=states.SUCCESS ) self._assert_single_item( task_execs, name='task_b_1', state=states.ERROR ) self._assert_single_item( task_execs, name='task_with_join', state=states.ERROR ) def test_no_workflow_error_after_inbound_error(self): wf_text = """--- version: "2.0" wf: output: continue_flag: <% $.get(continue_flag) %> task-defaults: on-error: - change_continue_flag tasks: task_a: action: std.fail on-success: - task_c: <% $.get(continue_flag) = null %> - task_a_process task_a_process: action: std.noop task_b: on-success: - task_c: <% $.get(continue_flag) = null %> task_c: join: all change_continue_flag: publish: continue_flag: false """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) def test_triggered_by_success(self): wf_text = """--- version: '2.0' wf: type: direct tasks: join_task: join: all task1: on-success: join_task task2: on-success: join_task """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) t_execs = wf_ex.task_executions task1 = self._assert_single_item(t_execs, name='task1') task2 = self._assert_single_item(t_execs, 
name='task2') join_task = self._assert_single_item(t_execs, name='join_task') key = 'triggered_by' self.assertIsNone(task1.runtime_context.get(key)) self.assertIsNone(task2.runtime_context.get(key)) self.assertIn( { "task_id": task1.id, "event": "on-success" }, join_task.runtime_context.get(key) ) self.assertIn( { "task_id": task2.id, "event": "on-success" }, join_task.runtime_context.get(key) ) def test_triggered_by_error(self): wf_text = """--- version: '2.0' wf: type: direct tasks: task1: on-success: join_task task2: action: std.fail on-success: join_task task3: action: std.noop on-error: join_task join_task: join: all """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) t_execs = wf_ex.task_executions task1 = self._assert_single_item( t_execs, name='task1', state=states.SUCCESS ) task2 = self._assert_single_item( t_execs, name='task2', state=states.ERROR ) task3 = self._assert_single_item( t_execs, name='task3', state=states.SUCCESS ) join_task = self._assert_single_item( t_execs, name='join_task', state=states.ERROR ) key = 'triggered_by' self.assertIsNone(task1.runtime_context.get(key)) self.assertIsNone(task2.runtime_context.get(key)) self.assertIsNone(task3.runtime_context.get(key)) self.assertIn( { "task_id": task2.id, "event": "not triggered" }, join_task.runtime_context.get(key) ) self.assertIn( { "task_id": task3.id, "event": "not triggered" }, join_task.runtime_context.get(key) ) def test_triggered_by_impossible_route(self): wf_text = """--- version: '2.0' wf: type: direct tasks: task1: on-success: join_task task2: action: std.fail on-success: task3 task3: action: std.noop on-success: join_task join_task: join: all """ wf_service.create_workflows(wf_text) wf_ex = self.engine.start_workflow('wf') self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) t_execs = 
wf_ex.task_executions task1 = self._assert_single_item( t_execs, name='task1', state=states.SUCCESS ) task2 = self._assert_single_item( t_execs, name='task2', state=states.ERROR ) join_task = self._assert_single_item( t_execs, name='join_task', state=states.ERROR ) self.assertEqual(3, len(t_execs)) key = 'triggered_by' self.assertIsNone(task1.runtime_context.get(key)) self.assertIsNone(task2.runtime_context.get(key)) # Note: in case if execution does not exist for a previous # task we can't track it in "triggered_by" because we need # to know its ID so we leave it blank. self.assertFalse(join_task.runtime_context.get(key)) def test_join_saving_task_context_with_all(self): workflow = """--- version: '2.0' test_workflow: type: direct tasks: task1: action: std.echo output='task1' on-success: - task2 publish: result: <% task().result %> task2: action: std.echo output='task2' join: all publish: result: <% task().result %> """ wf_service.create_workflows(workflow) wf_ex = self.engine.start_workflow('test_workflow') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) tasks = wf_ex.task_executions for task in tasks: task_result = task.published["result"] self.assertEqual(task.name, task_result, "The result of task must equal own name") mistral-6.0.0/mistral/tests/unit/engine/test_workflow_cancel.py0000666000175100017510000004463113245513262025062 0ustar zuulzuul00000000000000# Copyright 2015 - StackStorm, Inc. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and
# limitations under the License.

from mistral.db.v2 import api as db_api
from mistral.services import workbooks as wb_service
from mistral.services import workflows as wf_service
from mistral.tests.unit.engine import base
from mistral.workflow import states


class WorkflowCancelTest(base.EngineTestCase):
    def test_cancel_workflow(self):
        workflow = """
        version: '2.0'

        wf:
          type: direct
          tasks:
            task1:
              action: std.echo output="Echo"
              on-complete:
                - task2

            task2:
              action: std.echo output="foo"
              wait-before: 3
        """

        wf_service.create_workflows(workflow)

        wf_ex = self.engine.start_workflow('wf')

        self.engine.stop_workflow(
            wf_ex.id,
            states.CANCELLED,
            "Cancelled by user."
        )

        self.await_workflow_cancelled(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            task_1_ex = self._assert_single_item(task_execs, name='task1')

        self.await_task_success(task_1_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            task_1_ex = self._assert_single_item(task_execs, name='task1')

        self.assertEqual(states.CANCELLED, wf_ex.state)
        self.assertEqual("Cancelled by user.", wf_ex.state_info)
        self.assertEqual(1, len(task_execs))
        self.assertEqual(states.SUCCESS, task_1_ex.state)

    def test_cancel_workflow_if_definition_deleted(self):
        workflow = """
        version: '2.0'

        wf:
          type: direct
          tasks:
            task1:
              action: std.echo output="foo"
              wait-before: 5
        """

        wf = wf_service.create_workflows(workflow)[0]

        wf_ex = self.engine.start_workflow('wf')

        with db_api.transaction():
            db_api.delete_workflow_definition(wf.id)

        self.engine.stop_workflow(
            wf_ex.id,
            states.CANCELLED,
            "Cancelled by user."
) self.await_workflow_cancelled(wf_ex.id) def test_cancel_paused_workflow(self): workflow = """ version: '2.0' wf: type: direct tasks: task1: action: std.echo output="Echo" on-complete: - task2 task2: action: std.echo output="foo" wait-before: 3 """ wf_service.create_workflows(workflow) wf_ex = self.engine.start_workflow('wf') self.engine.pause_workflow(wf_ex.id) self.await_workflow_paused(wf_ex.id) self.engine.stop_workflow( wf_ex.id, states.CANCELLED, "Cancelled by user." ) self.await_workflow_cancelled(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions task_1_ex = self._assert_single_item(task_execs, name='task1') self.await_task_success(task_1_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions task_1_ex = self._assert_single_item( task_execs, name='task1' ) self.assertEqual(states.CANCELLED, wf_ex.state) self.assertEqual("Cancelled by user.", wf_ex.state_info) self.assertEqual(1, len(task_execs)) self.assertEqual(states.SUCCESS, task_1_ex.state) def test_cancel_completed_workflow(self): workflow = """ version: '2.0' wf: type: direct tasks: task1: action: std.echo output="Echo" """ wf_service.create_workflows(workflow) wf_ex = self.engine.start_workflow('wf') self.await_workflow_success(wf_ex.id) self.engine.stop_workflow( wf_ex.id, states.CANCELLED, "Cancelled by user." 
) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions task_1_ex = self._assert_single_item(task_execs, name='task1') self.assertEqual(states.SUCCESS, wf_ex.state) self.assertIsNone(wf_ex.state_info) self.assertEqual(1, len(task_execs)) self.assertEqual(states.SUCCESS, task_1_ex.state) def test_cancel_parent_workflow(self): workbook = """ version: '2.0' name: wb workflows: wf: type: direct tasks: taskx: workflow: subwf subwf: type: direct tasks: task1: action: std.echo output="Echo" on-complete: - task2 task2: action: std.echo output="foo" wait-before: 2 """ wb_service.create_workbook_v2(workbook) wf_ex = self.engine.start_workflow('wb.wf') self.engine.stop_workflow( wf_ex.id, states.CANCELLED, "Cancelled by user." ) self.await_workflow_cancelled(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions task_ex = self._assert_single_item(task_execs, name='taskx') self.await_task_cancelled(task_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions task_ex = self._assert_single_item(task_execs, name='taskx') subwf_execs = db_api.get_workflow_executions( task_execution_id=task_ex.id ) self.assertEqual(states.CANCELLED, wf_ex.state) self.assertEqual("Cancelled by user.", wf_ex.state_info) self.assertEqual(states.CANCELLED, task_ex.state) self.assertEqual("Cancelled by user.", task_ex.state_info) self.assertEqual(1, len(subwf_execs)) self.assertEqual(states.CANCELLED, subwf_execs[0].state) self.assertEqual("Cancelled by user.", subwf_execs[0].state_info) def test_cancel_child_workflow(self): workbook = """ version: '2.0' name: wb workflows: wf: type: direct tasks: taskx: workflow: subwf subwf: type: direct tasks: task1: action: std.echo output="Echo" on-complete: - task2 task2: action: std.echo output="foo" wait-before: 3 """ wb_service.create_workbook_v2(workbook) 
self.engine.start_workflow('wb.wf') with db_api.transaction(): wf_execs = db_api.get_workflow_executions() wf_ex = self._assert_single_item(wf_execs, name='wb.wf') task_ex = self._assert_single_item( wf_ex.task_executions, name='taskx' ) subwf_ex = self._assert_single_item(wf_execs, name='wb.subwf') self.engine.stop_workflow( subwf_ex.id, states.CANCELLED, "Cancelled by user." ) self.await_workflow_cancelled(subwf_ex.id) self.await_task_cancelled(task_ex.id) self.await_workflow_cancelled(wf_ex.id) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() wf_ex = self._assert_single_item(wf_execs, name='wb.wf') task_ex = self._assert_single_item( wf_ex.task_executions, name='taskx' ) subwf_ex = self._assert_single_item(wf_execs, name='wb.subwf') self.assertEqual(states.CANCELLED, subwf_ex.state) self.assertEqual("Cancelled by user.", subwf_ex.state_info) self.assertEqual(states.CANCELLED, task_ex.state) self.assertIn("Cancelled by user.", task_ex.state_info) self.assertEqual(states.CANCELLED, wf_ex.state) self.assertEqual("Cancelled tasks: taskx", wf_ex.state_info) def test_cancel_with_items_parent_workflow(self): workbook = """ version: '2.0' name: wb workflows: wf: type: direct tasks: taskx: with-items: i in [1, 2] workflow: subwf subwf: type: direct tasks: task1: action: std.echo output="Echo" on-complete: - task2 task2: action: std.echo output="foo" wait-before: 1 """ wb_service.create_workbook_v2(workbook) wf_ex = self.engine.start_workflow('wb.wf') self.engine.stop_workflow( wf_ex.id, states.CANCELLED, "Cancelled by user." 
) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions task_ex = self._assert_single_item(task_execs, name='taskx') self.await_workflow_cancelled(wf_ex.id) self.await_task_cancelled(task_ex.id) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() wf_ex = self._assert_single_item(wf_execs, name='wb.wf') task_ex = self._assert_single_item( wf_ex.task_executions, name='taskx' ) subwf_exs = self._assert_multiple_items( wf_execs, 2, name='wb.subwf' ) self.assertEqual(states.CANCELLED, subwf_exs[0].state) self.assertEqual("Cancelled by user.", subwf_exs[0].state_info) self.assertEqual(states.CANCELLED, subwf_exs[1].state) self.assertEqual("Cancelled by user.", subwf_exs[1].state_info) self.assertEqual(states.CANCELLED, task_ex.state) self.assertIn("cancelled", task_ex.state_info) self.assertEqual(states.CANCELLED, wf_ex.state) self.assertEqual("Cancelled by user.", wf_ex.state_info) def test_cancel_with_items_child_workflow(self): workbook = """ version: '2.0' name: wb workflows: wf: type: direct tasks: taskx: with-items: i in [1, 2] workflow: subwf subwf: type: direct tasks: task1: action: std.echo output="Echo" on-complete: - task2 task2: action: std.echo output="foo" wait-before: 1 """ wb_service.create_workbook_v2(workbook) self.engine.start_workflow('wb.wf') with db_api.transaction(): wf_execs = db_api.get_workflow_executions() wf_ex = self._assert_single_item(wf_execs, name='wb.wf') task_ex = self._assert_single_item( wf_ex.task_executions, name='taskx' ) subwf_exs = self._assert_multiple_items( wf_execs, 2, name='wb.subwf' ) self.engine.stop_workflow( subwf_exs[0].id, states.CANCELLED, "Cancelled by user." 
) self.await_workflow_cancelled(subwf_exs[0].id) self.await_workflow_success(subwf_exs[1].id) self.await_task_cancelled(task_ex.id) self.await_workflow_cancelled(wf_ex.id) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() wf_ex = self._assert_single_item(wf_execs, name='wb.wf') task_ex = self._assert_single_item( wf_ex.task_executions, name='taskx' ) subwf_exs = self._assert_multiple_items( wf_execs, 2, name='wb.subwf' ) self.assertEqual(states.CANCELLED, subwf_exs[0].state) self.assertEqual("Cancelled by user.", subwf_exs[0].state_info) self.assertEqual(states.SUCCESS, subwf_exs[1].state) self.assertIsNone(subwf_exs[1].state_info) self.assertEqual(states.CANCELLED, task_ex.state) self.assertIn("cancelled", task_ex.state_info) self.assertEqual(states.CANCELLED, wf_ex.state) self.assertEqual("Cancelled tasks: taskx", wf_ex.state_info) def test_cancel_then_fail_with_items_child_workflow(self): workbook = """ version: '2.0' name: wb workflows: wf: type: direct tasks: taskx: with-items: i in [1, 2] workflow: subwf subwf: type: direct tasks: task1: action: std.echo output="Echo" on-complete: - task2 task2: action: std.echo output="foo" wait-before: 1 """ wb_service.create_workbook_v2(workbook) self.engine.start_workflow('wb.wf') with db_api.transaction(): wf_execs = db_api.get_workflow_executions() wf_ex = self._assert_single_item(wf_execs, name='wb.wf') task_ex = self._assert_single_item( wf_ex.task_executions, name='taskx' ) subwf_exs = self._assert_multiple_items( wf_execs, 2, name='wb.subwf' ) self.engine.stop_workflow( subwf_exs[0].id, states.CANCELLED, "Cancelled by user." ) self.engine.stop_workflow( subwf_exs[1].id, states.ERROR, "Failed by user." 
) self.await_workflow_cancelled(subwf_exs[0].id) self.await_workflow_error(subwf_exs[1].id) self.await_task_cancelled(task_ex.id) self.await_workflow_cancelled(wf_ex.id) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() wf_ex = self._assert_single_item(wf_execs, name='wb.wf') task_ex = self._assert_single_item( wf_ex.task_executions, name='taskx' ) subwf_exs = self._assert_multiple_items( wf_execs, 2, name='wb.subwf' ) self.assertEqual(states.CANCELLED, subwf_exs[0].state) self.assertEqual("Cancelled by user.", subwf_exs[0].state_info) self.assertEqual(states.ERROR, subwf_exs[1].state) self.assertEqual("Failed by user.", subwf_exs[1].state_info) self.assertEqual(states.CANCELLED, task_ex.state) self.assertIn("cancelled", task_ex.state_info) self.assertEqual(states.CANCELLED, wf_ex.state) self.assertEqual("Cancelled tasks: taskx", wf_ex.state_info) def test_fail_then_cancel_with_items_child_workflow(self): workbook = """ version: '2.0' name: wb workflows: wf: type: direct tasks: taskx: with-items: i in [1, 2] workflow: subwf subwf: type: direct tasks: task1: action: std.echo output="Echo" on-complete: - task2 task2: action: std.echo output="foo" wait-before: 1 """ wb_service.create_workbook_v2(workbook) self.engine.start_workflow('wb.wf') with db_api.transaction(): wf_execs = db_api.get_workflow_executions() wf_ex = self._assert_single_item(wf_execs, name='wb.wf') task_ex = self._assert_single_item( wf_ex.task_executions, name='taskx' ) subwf_exs = self._assert_multiple_items( wf_execs, 2, name='wb.subwf' ) self.engine.stop_workflow( subwf_exs[1].id, states.ERROR, "Failed by user." ) self.engine.stop_workflow( subwf_exs[0].id, states.CANCELLED, "Cancelled by user." 
) self.await_workflow_cancelled(subwf_exs[0].id) self.await_workflow_error(subwf_exs[1].id) self.await_task_cancelled(task_ex.id) self.await_workflow_cancelled(wf_ex.id) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() wf_ex = self._assert_single_item(wf_execs, name='wb.wf') task_ex = self._assert_single_item( wf_ex.task_executions, name='taskx' ) subwf_exs = self._assert_multiple_items( wf_execs, 2, name='wb.subwf' ) self.assertEqual(states.CANCELLED, subwf_exs[0].state) self.assertEqual("Cancelled by user.", subwf_exs[0].state_info) self.assertEqual(states.ERROR, subwf_exs[1].state) self.assertEqual("Failed by user.", subwf_exs[1].state_info) self.assertEqual(states.CANCELLED, task_ex.state) self.assertIn("cancelled", task_ex.state_info) self.assertEqual(states.CANCELLED, wf_ex.state) self.assertEqual("Cancelled tasks: taskx", wf_ex.state_info) mistral-6.0.0/mistral/tests/unit/engine/test_reverse_workflow_rerun_cancelled.py0000666000175100017510000001512013245513262030504 0ustar zuulzuul00000000000000# Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import mock

from oslo_config import cfg

from mistral.actions import std_actions
from mistral.db.v2 import api as db_api
from mistral.services import workbooks as wb_service
from mistral.tests.unit.engine import base
from mistral.workflow import states
from mistral_lib import actions as ml_actions


# Use the set_default method to set value otherwise in certain test cases
# the change in value is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan')


class ReverseWorkflowRerunCancelledTest(base.EngineTestCase):
    @mock.patch.object(
        std_actions.EchoAction,
        'run',
        mock.MagicMock(
            side_effect=[
                'Task 2',  # Mock task2 success.
                'Task 3'   # Mock task3 success.
            ]
        )
    )
    def test_rerun_cancelled_task(self):
        wb_def = """
        version: '2.0'

        name: wb1

        workflows:
          wf1:
            type: reverse
            tasks:
              t1:
                action: std.async_noop
              t2:
                action: std.echo output="Task 2"
                requires:
                  - t1
              t3:
                action: std.echo output="Task 3"
                requires:
                  - t2
        """

        wb_service.create_workbook_v2(wb_def)

        wf1_ex = self.engine.start_workflow('wb1.wf1', task_name='t3')

        self.await_workflow_state(wf1_ex.id, states.RUNNING)

        with db_api.transaction():
            wf1_execs = db_api.get_workflow_executions()

            wf1_ex = self._assert_single_item(wf1_execs, name='wb1.wf1')

            wf1_t1_ex = self._assert_single_item(
                wf1_ex.task_executions,
                name='t1'
            )

            wf1_t1_action_exs = db_api.get_action_executions(
                task_execution_id=wf1_t1_ex.id
            )

        self.assertEqual(1, len(wf1_t1_action_exs))
        self.assertEqual(states.RUNNING, wf1_t1_action_exs[0].state)

        # Cancel action execution for task.
self.engine.on_action_complete( wf1_t1_action_exs[0].id, ml_actions.Result(cancel=True) ) self.await_workflow_cancelled(wf1_ex.id) with db_api.transaction(): wf1_ex = db_api.get_workflow_execution(wf1_ex.id) wf1_t1_ex = self._assert_single_item( wf1_ex.task_executions, name='t1' ) self.await_task_cancelled(wf1_t1_ex.id) with db_api.transaction(): wf1_ex = db_api.get_workflow_execution(wf1_ex.id) wf1_t1_ex = self._assert_single_item( wf1_ex.task_executions, name='t1' ) self.assertEqual(states.CANCELLED, wf1_ex.state) self.assertEqual("Cancelled tasks: t1", wf1_ex.state_info) self.assertEqual(1, len(wf1_ex.task_executions)) self.assertEqual(states.CANCELLED, wf1_t1_ex.state) self.assertIsNone(wf1_t1_ex.state_info) # Resume workflow and re-run cancelled task. self.engine.rerun_workflow(wf1_t1_ex.id) with db_api.transaction(): wf1_ex = db_api.get_workflow_execution(wf1_ex.id) wf1_task_execs = wf1_ex.task_executions self.assertEqual(states.RUNNING, wf1_ex.state) self.assertIsNone(wf1_ex.state_info) # Mark async action execution complete. wf1_t1_ex = self._assert_single_item(wf1_task_execs, name='t1') wf1_t1_action_exs = db_api.get_action_executions( task_execution_id=wf1_t1_ex.id ) self.assertEqual(states.RUNNING, wf1_t1_ex.state) self.assertEqual(2, len(wf1_t1_action_exs)) # Check there is exactly 1 action in Running and 1 in Cancelled state. # Order doesn't matter. self._assert_single_item(wf1_t1_action_exs, state=states.CANCELLED) running_execution = self._assert_single_item( wf1_t1_action_exs, state=states.RUNNING ) self.engine.on_action_complete( running_execution.id, ml_actions.Result(data={'foo': 'bar'}) ) # Wait for the workflow to succeed. 
        self.await_workflow_success(wf1_ex.id)

        with db_api.transaction():
            wf1_ex = db_api.get_workflow_execution(wf1_ex.id)

            wf1_task_execs = wf1_ex.task_executions

            self.assertEqual(states.SUCCESS, wf1_ex.state)
            self.assertIsNone(wf1_ex.state_info)
            self.assertEqual(3, len(wf1_task_execs))

            wf1_t1_ex = self._assert_single_item(wf1_task_execs, name='t1')
            wf1_t2_ex = self._assert_single_item(wf1_task_execs, name='t2')
            wf1_t3_ex = self._assert_single_item(wf1_task_execs, name='t3')

            # Check action executions of task 1.
            self.assertEqual(states.SUCCESS, wf1_t1_ex.state)
            self.assertIsNone(wf1_t1_ex.state_info)

            wf1_t1_action_exs = db_api.get_action_executions(
                task_execution_id=wf1_t1_ex.id
            )

            self.assertEqual(2, len(wf1_t1_action_exs))

            # Check there is exactly 1 action in Success and 1 in Cancelled
            # state. Order doesn't matter.
            self._assert_single_item(wf1_t1_action_exs, state=states.SUCCESS)
            self._assert_single_item(wf1_t1_action_exs, state=states.CANCELLED)

            # Check action executions of task 2.
            self.assertEqual(states.SUCCESS, wf1_t2_ex.state)

            wf1_t2_action_exs = db_api.get_action_executions(
                task_execution_id=wf1_t2_ex.id
            )

            self.assertEqual(1, len(wf1_t2_action_exs))
            self.assertEqual(states.SUCCESS, wf1_t2_action_exs[0].state)

            # Check action executions of task 3.
            self.assertEqual(states.SUCCESS, wf1_t3_ex.state)

            wf1_t3_action_exs = db_api.get_action_executions(
                task_execution_id=wf1_t3_ex.id
            )

            self.assertEqual(1, len(wf1_t3_action_exs))
            self.assertEqual(states.SUCCESS, wf1_t3_action_exs[0].state)
mistral-6.0.0/mistral/tests/unit/engine/test_tasks_function.py0000666000175100017510000003002113245513262024721 0ustar zuulzuul00000000000000
# Copyright 2016 - Nokia, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg from mistral.db.v2 import api as db_api from mistral.services import workbooks as wb_service from mistral.tests.unit.engine import base from mistral.workflow import states # Use the set_default method to set value otherwise in certain test cases # the change in value is not permanent. cfg.CONF.set_default('auth_enable', False, group='pecan') WORKBOOK_WITH_EXPRESSIONS = """ --- version: '2.0' name: wb workflows: test_tasks_function: input: - wf1_wx_id - wf2_wx_id - wf3_wx_id - wf4_wx_id - wf5_wx_id tasks: main_task: action: std.noop publish: all_tasks_yaql: <% tasks() %> all_tasks_jinja: "{{ tasks() }}" wf1_tasks_yaql: <% tasks($.wf1_wx_id) %> wf1_tasks_jinja: "{{ tasks(_.wf1_wx_id) }}" wf1_recursive_tasks_yaql: <% tasks($.wf1_wx_id, true) %> wf1_recursive_tasks_jinja: "{{ tasks(_.wf1_wx_id, true) }}" wf1_recursive_error_tasks_yaql: <% tasks($.wf1_wx_id, true, ERROR) %> wf1_recursive_error_tasks_jinja: "{{ tasks(_.wf1_wx_id, True, 'ERROR') }}" wf1_not_recursive_error_tasks_yaql: <% tasks($.wf1_wx_id, false, ERROR) %> wf1_not_recursive_error_tasks_jinja: "{{ tasks(_.wf1_wx_id, False, 'ERROR') }}" wf1_recursive_success_flat_tasks_yaql: <% tasks($.wf1_wx_id, true, SUCCESS, true) %> wf1_recursive_success_flat_tasks_jinja: "{{ tasks(_.wf1_wx_id, True, 'SUCCESS', True) }}" wf2_recursive_tasks_yaql: <% tasks($.wf2_wx_id, true) %> wf2_recursive_tasks_jinja: "{{ tasks(_.wf2_wx_id, true) }}" wf3_recursive_error_tasks_yaql: <% tasks($.wf3_wx_id, true, ERROR) %> wf3_recursive_error_tasks_jinja: "{{ tasks(_.wf3_wx_id, True, 'ERROR') }}" 
wf3_recursive_error_flat_tasks_yaql: <% tasks($.wf3_wx_id, true, ERROR, true) %> wf3_recursive_error_flat_tasks_jinja: "{{ tasks(_.wf3_wx_id, True, 'ERROR', True) }}" wf4_recursive_error_flat_tasks_yaql: <% tasks($.wf4_wx_id, true, ERROR, true) %> wf4_recursive_error_flat_tasks_jinja: "{{ tasks(_.wf4_wx_id, True, 'ERROR', True) }}" wf5_recursive_error_flat_tasks_yaql: <% tasks($.wf5_wx_id, true, ERROR, true) %> wf5_recursive_error_flat_tasks_jinja: "{{ tasks(_.wf5_wx_id, True, 'ERROR', True) }}" wf1_top_lvl: tasks: top_lvl_wf1_task_1: workflow: wf1_second_lvl top_lvl_wf1_task_2: action: std.noop wf1_second_lvl: tasks: second_lvl_wf1_task_1: workflow: wf1_third_lvl_fail on-error: - second_lvl_wf1_task_2 second_lvl_wf1_task_2: action: std.noop second_lvl_wf1_task_3: action: std.noop wf1_third_lvl_fail: tasks: third_lvl_wf1_task_1: action: std.noop on-success: - third_lvl_wf1_task_2_fail third_lvl_wf1_task_2_fail: action: std.fail third_lvl_wf1_task_3: action: std.noop wf2_top_lvl: tasks: top_lvl_wf2_task_1: action: std.noop top_lvl_wf2_task_2: action: std.noop wf3_top_lvl: tasks: top_lvl_wf3_task_1_fail: workflow: wf3_second_lvl_fail top_lvl_wf3_task_2_fail: action: std.fail wf3_second_lvl_fail: tasks: second_lvl_wf3_task_1_fail: workflow: wf3_third_lvl_fail second_lvl_wf3_task_2: action: std.noop second_lvl_wf3_task_3: action: std.noop wf3_third_lvl_fail: tasks: third_lvl_wf3_task_1: action: std.noop on-success: - third_lvl_wf3_task_2 third_lvl_wf3_task_2: action: std.noop third_lvl_wf3_task_3_fail: action: std.fail wf4_top_lvl: tasks: top_lvl_wf4_task_1: workflow: wf4_second_lvl publish: raise_error: <% $.invalid_yaql_expression %> wf4_second_lvl: tasks: second_lvl_wf4_task_1: action: std.noop wf5_top_lvl: tasks: top_lvl_wf5_task_1: workflow: wf4_second_lvl input: raise_error: <% $.invalid_yaql_expression2 %> wf5_second_lvl: tasks: second_lvl_wf5_task_1: workflow: wf5_third_lvl wf5_third_lvl: tasks: third_lvl_wf5_task_1: action: std.noop """ class 
TasksFunctionTest(base.EngineTestCase): def _assert_published_tasks(self, task, published_key, expected_tasks_count=None, expected_tasks_names=None): published = task.published[published_key] self.assertIsNotNone( published, "there is a problem with publishing '{}'".format(published_key) ) published_names = [t['name'] for t in published] if expected_tasks_names: for e in expected_tasks_names: self.assertIn(e, published_names) if not expected_tasks_count: expected_tasks_count = len(expected_tasks_names) if expected_tasks_count: self.assertEqual(expected_tasks_count, len(published)) def test_tasks_function(self): wb_service.create_workbook_v2(WORKBOOK_WITH_EXPRESSIONS) # Start helping workflow executions. wf1_ex = self.engine.start_workflow('wb.wf1_top_lvl') wf2_ex = self.engine.start_workflow('wb.wf2_top_lvl') wf3_ex = self.engine.start_workflow('wb.wf3_top_lvl') wf4_ex = self.engine.start_workflow('wb.wf4_top_lvl') wf5_ex = self.engine.start_workflow('wb.wf5_top_lvl') self.await_workflow_success(wf1_ex.id) self.await_workflow_success(wf2_ex.id) self.await_workflow_error(wf3_ex.id) self.await_workflow_error(wf4_ex.id) self.await_workflow_error(wf5_ex.id) # Start test workflow execution wf_ex = self.engine.start_workflow( 'wb.test_tasks_function', wf_input={ 'wf1_wx_id': wf1_ex.id, 'wf2_wx_id': wf2_ex.id, 'wf3_wx_id': wf3_ex.id, 'wf4_wx_id': wf4_ex.id, 'wf5_wx_id': wf5_ex.id } ) self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions self.assertEqual(states.SUCCESS, wf_ex.state) self.assertEqual(1, len(task_execs)) main_task = task_execs[0] self._assert_published_tasks(main_task, 'all_tasks_yaql', 22) self._assert_published_tasks(main_task, 'all_tasks_jinja', 22) self._assert_published_tasks( main_task, 'wf1_tasks_yaql', 2, ['top_lvl_wf1_task_1', 'top_lvl_wf1_task_2'] ) self._assert_published_tasks( main_task, 'wf1_tasks_jinja', 2, ['top_lvl_wf1_task_1', 'top_lvl_wf1_task_2'] 
) self._assert_published_tasks( main_task, 'wf1_recursive_tasks_yaql', 8, [ 'top_lvl_wf1_task_1', 'top_lvl_wf1_task_2', 'second_lvl_wf1_task_3', 'second_lvl_wf1_task_1', 'second_lvl_wf1_task_2', 'third_lvl_wf1_task_3', 'third_lvl_wf1_task_1', 'third_lvl_wf1_task_2_fail' ] ) self._assert_published_tasks( main_task, 'wf1_recursive_tasks_jinja', 8, [ 'top_lvl_wf1_task_1', 'top_lvl_wf1_task_2', 'second_lvl_wf1_task_3', 'second_lvl_wf1_task_1', 'second_lvl_wf1_task_2', 'third_lvl_wf1_task_3', 'third_lvl_wf1_task_1', 'third_lvl_wf1_task_2_fail' ] ) self._assert_published_tasks( main_task, 'wf1_recursive_error_tasks_yaql', 2, ['second_lvl_wf1_task_1', 'third_lvl_wf1_task_2_fail'] ) self._assert_published_tasks( main_task, 'wf1_recursive_error_tasks_jinja', 2, ['second_lvl_wf1_task_1', 'third_lvl_wf1_task_2_fail'] ) self._assert_published_tasks( main_task, 'wf1_not_recursive_error_tasks_yaql', 0 ) self._assert_published_tasks( main_task, 'wf1_not_recursive_error_tasks_jinja', 0 ) self._assert_published_tasks( main_task, 'wf1_recursive_success_flat_tasks_yaql', 5, [ 'top_lvl_wf1_task_2', 'second_lvl_wf1_task_3', 'second_lvl_wf1_task_2', 'third_lvl_wf1_task_3', 'third_lvl_wf1_task_1' ] ) self._assert_published_tasks( main_task, 'wf1_recursive_success_flat_tasks_jinja', 5, [ 'top_lvl_wf1_task_2', 'second_lvl_wf1_task_3', 'second_lvl_wf1_task_2', 'third_lvl_wf1_task_3', 'third_lvl_wf1_task_1' ] ) self._assert_published_tasks( main_task, 'wf2_recursive_tasks_yaql', 2, ['top_lvl_wf2_task_2', 'top_lvl_wf2_task_1'] ) self._assert_published_tasks( main_task, 'wf2_recursive_tasks_jinja', 2, ['top_lvl_wf2_task_2', 'top_lvl_wf2_task_1'] ) self._assert_published_tasks( main_task, 'wf3_recursive_error_tasks_yaql', 4, [ 'top_lvl_wf3_task_1_fail', 'top_lvl_wf3_task_2_fail', 'second_lvl_wf3_task_1_fail', 'third_lvl_wf3_task_3_fail' ] ) self._assert_published_tasks( main_task, 'wf3_recursive_error_tasks_jinja', 4, [ 'top_lvl_wf3_task_1_fail', 'top_lvl_wf3_task_2_fail', 
'second_lvl_wf3_task_1_fail', 'third_lvl_wf3_task_3_fail' ] ) self._assert_published_tasks( main_task, 'wf3_recursive_error_flat_tasks_yaql', 2, ['top_lvl_wf3_task_2_fail', 'third_lvl_wf3_task_3_fail'] ) self._assert_published_tasks( main_task, 'wf3_recursive_error_flat_tasks_jinja', 2, ['top_lvl_wf3_task_2_fail', 'third_lvl_wf3_task_3_fail'] ) self._assert_published_tasks( main_task, 'wf4_recursive_error_flat_tasks_yaql', 1, ['top_lvl_wf4_task_1'] ) self._assert_published_tasks( main_task, 'wf4_recursive_error_flat_tasks_jinja', 1, ['top_lvl_wf4_task_1'] ) self._assert_published_tasks( main_task, 'wf5_recursive_error_flat_tasks_yaql', 1, ['top_lvl_wf5_task_1'] ) self._assert_published_tasks( main_task, 'wf5_recursive_error_flat_tasks_jinja', 1, ['top_lvl_wf5_task_1'] ) mistral-6.0.0/mistral/tests/unit/engine/test_task_pause_resume.py0000666000175100017510000002473313245513262025423 0ustar zuulzuul00000000000000# Copyright 2015 - StackStorm, Inc. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
from mistral.db.v2 import api as db_api
from mistral.services import workflows as wf_service
from mistral.tests.unit.engine import base
from mistral.workflow import states
from mistral_lib import actions as ml_actions


class TaskPauseResumeTest(base.EngineTestCase):
    def test_pause_resume_action_ex(self):
        workflow = """
        version: '2.0'

        wf:
          tasks:
            task1:
              action: std.async_noop
              on-success:
                - task2

            task2:
              action: std.noop
        """

        wf_service.create_workflows(workflow)

        wf_ex = self.engine.start_workflow('wf')

        self.await_workflow_state(wf_ex.id, states.RUNNING)

        with db_api.transaction():
            wf_execs = db_api.get_workflow_executions()

            wf_ex = self._assert_single_item(wf_execs, name='wf')

            task_execs = wf_ex.task_executions

            task_1_ex = self._assert_single_item(
                wf_ex.task_executions,
                name='task1'
            )

            task_1_action_exs = db_api.get_action_executions(
                task_execution_id=task_1_ex.id
            )

        self.assertEqual(states.RUNNING, wf_ex.state)
        self.assertEqual(1, len(task_execs))
        self.assertEqual(states.RUNNING, task_1_ex.state)
        self.assertEqual(1, len(task_1_action_exs))
        self.assertEqual(states.RUNNING, task_1_action_exs[0].state)

        # Pause the action execution of task 1.
        self.engine.on_action_update(task_1_action_exs[0].id, states.PAUSED)

        self.await_task_paused(task_1_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            task_1_ex = self._assert_single_item(
                wf_ex.task_executions,
                name='task1'
            )

            task_1_action_exs = db_api.get_action_executions(
                task_execution_id=task_1_ex.id
            )

        self.assertEqual(states.PAUSED, wf_ex.state)
        self.assertEqual(1, len(task_execs))
        self.assertEqual(states.PAUSED, task_1_ex.state)
        self.assertEqual(1, len(task_1_action_exs))
        self.assertEqual(states.PAUSED, task_1_action_exs[0].state)

        # Resume the action execution of task 1.
self.engine.on_action_update(task_1_action_exs[0].id, states.RUNNING) self.await_task_running(task_1_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_1_ex = self._assert_single_item( wf_ex.task_executions, name='task1' ) task_1_action_exs = db_api.get_action_executions( task_execution_id=task_1_ex.id ) self.assertEqual(states.RUNNING, wf_ex.state) self.assertEqual(1, len(task_execs)) self.assertEqual(states.RUNNING, task_1_ex.state) self.assertEqual(1, len(task_1_action_exs)) self.assertEqual(states.RUNNING, task_1_action_exs[0].state) # Complete action execution of task 1. self.engine.on_action_complete( task_1_action_exs[0].id, ml_actions.Result(data={'result': 'foobar'}) ) # Wait for the workflow execution to complete. self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions task_1_ex = self._assert_single_item(task_execs, name='task1') task_1_action_exs = db_api.get_action_executions( task_execution_id=task_1_ex.id ) task_2_ex = self._assert_single_item(task_execs, name='task2') self.assertEqual(states.SUCCESS, wf_ex.state) self.assertEqual(2, len(task_execs)) self.assertEqual(states.SUCCESS, task_1_ex.state) self.assertEqual(1, len(task_1_action_exs)) self.assertEqual(states.SUCCESS, task_1_action_exs[0].state) self.assertEqual(states.SUCCESS, task_2_ex.state) def test_pause_resume_action_ex_with_items_task(self): workflow = """ version: '2.0' wf: tasks: task1: with-items: i in <% range(3) %> action: std.async_noop on-success: - task2 task2: action: std.noop """ wf_service.create_workflows(workflow) wf_ex = self.engine.start_workflow('wf') self.await_workflow_state(wf_ex.id, states.RUNNING) with db_api.transaction(): wf_execs = db_api.get_workflow_executions() wf_ex = self._assert_single_item(wf_execs, name='wf') task_execs = wf_ex.task_executions task_1_ex = self._assert_single_item( wf_ex.task_executions, name='task1' ) 
task_1_action_exs = sorted( db_api.get_action_executions(task_execution_id=task_1_ex.id), key=lambda x: x['runtime_context']['index'] ) self.assertEqual(states.RUNNING, wf_ex.state) self.assertEqual(1, len(task_execs)) self.assertEqual(states.RUNNING, task_1_ex.state) self.assertEqual(3, len(task_1_action_exs)) self.assertEqual(states.RUNNING, task_1_action_exs[0].state) self.assertEqual(states.RUNNING, task_1_action_exs[1].state) self.assertEqual(states.RUNNING, task_1_action_exs[2].state) # Pause the 1st action execution of task 1. self.engine.on_action_update(task_1_action_exs[0].id, states.PAUSED) self.await_task_paused(task_1_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions task_1_ex = self._assert_single_item( wf_ex.task_executions, name='task1' ) task_1_action_exs = sorted( db_api.get_action_executions(task_execution_id=task_1_ex.id), key=lambda x: x['runtime_context']['index'] ) self.assertEqual(states.PAUSED, wf_ex.state) self.assertEqual(1, len(task_execs)) self.assertEqual(states.PAUSED, task_1_ex.state) self.assertEqual(3, len(task_1_action_exs)) self.assertEqual(states.PAUSED, task_1_action_exs[0].state) self.assertEqual(states.RUNNING, task_1_action_exs[1].state) self.assertEqual(states.RUNNING, task_1_action_exs[2].state) # Complete 2nd and 3rd action executions of task 1. 
self.engine.on_action_complete( task_1_action_exs[1].id, ml_actions.Result(data={'result': 'two'}) ) self.engine.on_action_complete( task_1_action_exs[2].id, ml_actions.Result(data={'result': 'three'}) ) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions task_1_ex = self._assert_single_item( wf_ex.task_executions, name='task1' ) task_1_action_exs = sorted( db_api.get_action_executions(task_execution_id=task_1_ex.id), key=lambda x: x['runtime_context']['index'] ) self.assertEqual(states.PAUSED, wf_ex.state) self.assertEqual(1, len(task_execs)) self.assertEqual(states.PAUSED, task_1_ex.state) self.assertEqual(3, len(task_1_action_exs)) self.assertEqual(states.PAUSED, task_1_action_exs[0].state) self.assertEqual(states.SUCCESS, task_1_action_exs[1].state) self.assertEqual(states.SUCCESS, task_1_action_exs[2].state) # Resume the 1st action execution of task 1. self.engine.on_action_update(task_1_action_exs[0].id, states.RUNNING) self.await_task_running(task_1_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_1_ex = self._assert_single_item( wf_ex.task_executions, name='task1' ) task_1_action_exs = sorted( db_api.get_action_executions(task_execution_id=task_1_ex.id), key=lambda x: x['runtime_context']['index'] ) self.assertEqual(states.RUNNING, wf_ex.state) self.assertEqual(1, len(task_execs)) self.assertEqual(states.RUNNING, task_1_ex.state) self.assertEqual(3, len(task_1_action_exs)) self.assertEqual(states.RUNNING, task_1_action_exs[0].state) self.assertEqual(states.SUCCESS, task_1_action_exs[1].state) self.assertEqual(states.SUCCESS, task_1_action_exs[2].state) # Complete the 1st action execution of task 1. self.engine.on_action_complete( task_1_action_exs[0].id, ml_actions.Result(data={'result': 'foobar'}) ) # Wait for the workflow execution to complete. 
        self.await_workflow_success(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

            task_1_ex = self._assert_single_item(task_execs, name='task1')

            task_1_action_exs = sorted(
                db_api.get_action_executions(task_execution_id=task_1_ex.id),
                key=lambda x: x['runtime_context']['index']
            )

            task_2_ex = self._assert_single_item(task_execs, name='task2')

        self.assertEqual(states.SUCCESS, wf_ex.state)
        self.assertEqual(2, len(task_execs))
        self.assertEqual(states.SUCCESS, task_1_ex.state)
        self.assertEqual(3, len(task_1_action_exs))
        self.assertEqual(states.SUCCESS, task_1_action_exs[0].state)
        self.assertEqual(states.SUCCESS, task_1_action_exs[1].state)
        self.assertEqual(states.SUCCESS, task_1_action_exs[2].state)
        self.assertEqual(states.SUCCESS, task_2_ex.state)


# mistral-6.0.0/mistral/tests/unit/engine/test_reverse_workflow_rerun.py

# Copyright 2015 - StackStorm, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import mock
from oslo_config import cfg

from mistral.actions import std_actions
from mistral.db.v2 import api as db_api
from mistral import exceptions as exc
from mistral.services import workbooks as wb_service
from mistral.tests.unit.engine import base
from mistral.workflow import states

# Use cfg.CONF.set_default() to override config values; in certain test
# cases a direct assignment to the option is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan') SIMPLE_WORKBOOK = """ --- version: '2.0' name: wb1 workflows: wf1: type: reverse tasks: t1: action: std.echo output="Task 1" t2: action: std.echo output="Task 2" requires: - t1 t3: action: std.echo output="Task 3" requires: - t2 """ SIMPLE_WORKBOOK_DIFF_ENV_VAR = """ --- version: '2.0' name: wb1 workflows: wf1: type: reverse tasks: t1: action: std.echo output="Task 1" t2: action: std.echo output=<% env().var1 %> requires: - t1 t3: action: std.echo output=<% env().var2 %> requires: - t2 """ class ReverseWorkflowRerunTest(base.EngineTestCase): @mock.patch.object( std_actions.EchoAction, 'run', mock.MagicMock( side_effect=[ 'Task 1', # Mock task1 success for initial run. exc.ActionException(), # Mock task2 exception for initial run. 'Task 2', # Mock task2 success for rerun. 'Task 3' # Mock task3 success. ] ) ) def test_rerun(self): wb_service.create_workbook_v2(SIMPLE_WORKBOOK) # Run workflow and fail task. wf_ex = self.engine.start_workflow('wb1.wf1', task_name='t3') self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions self.assertEqual(states.ERROR, wf_ex.state) self.assertIsNotNone(wf_ex.state_info) self.assertEqual(2, len(task_execs)) task_1_ex = self._assert_single_item(task_execs, name='t1') task_2_ex = self._assert_single_item(task_execs, name='t2') self.assertEqual(states.SUCCESS, task_1_ex.state) self.assertEqual(states.ERROR, task_2_ex.state) self.assertIsNotNone(task_2_ex.state_info) # Resume workflow and re-run failed task. self.engine.rerun_workflow(task_2_ex.id) wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertEqual(states.RUNNING, wf_ex.state) self.assertIsNone(wf_ex.state_info) # Wait for the workflow to succeed. 
self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions self.assertEqual(states.SUCCESS, wf_ex.state) self.assertIsNone(wf_ex.state_info) self.assertEqual(3, len(task_execs)) task_1_ex = self._assert_single_item(task_execs, name='t1') task_2_ex = self._assert_single_item(task_execs, name='t2') task_3_ex = self._assert_single_item(task_execs, name='t3') # Check action executions of task 1. self.assertEqual(states.SUCCESS, task_1_ex.state) task_1_action_exs = db_api.get_action_executions( task_execution_id=task_1_ex.id ) self.assertEqual(1, len(task_1_action_exs)) self.assertEqual(states.SUCCESS, task_1_action_exs[0].state) # Check action executions of task 2. self.assertEqual(states.SUCCESS, task_2_ex.state) self.assertIsNone(task_2_ex.state_info) task_2_action_exs = db_api.get_action_executions( task_execution_id=task_2_ex.id ) self.assertEqual(2, len(task_2_action_exs)) # Check there is exactly 1 action in Success and 1 in Error state. # Order doesn't matter. self.assertEqual( 1, len([act_ex for act_ex in task_2_action_exs if act_ex.state == states.SUCCESS]) ) self.assertEqual( 1, len([act_ex for act_ex in task_2_action_exs if act_ex.state == states.ERROR]) ) # Check action executions of task 3. self.assertEqual(states.SUCCESS, task_3_ex.state) task_3_action_exs = db_api.get_action_executions( task_execution_id=task_3_ex.id ) self.assertEqual(1, len(task_3_action_exs)) self.assertEqual(states.SUCCESS, task_3_action_exs[0].state) @mock.patch.object( std_actions.EchoAction, 'run', mock.MagicMock( side_effect=[ 'Task 1', # Mock task1 success for initial run. exc.ActionException(), # Mock task2 exception for initial run. 'Task 2', # Mock task2 success for rerun. 'Task 3' # Mock task3 success. ] ) ) def test_rerun_diff_env_vars(self): wb_service.create_workbook_v2(SIMPLE_WORKBOOK_DIFF_ENV_VAR) # Initial environment variables for the workflow execution. 
env = { 'var1': 'fee fi fo fum', 'var2': 'foobar' } # Run workflow and fail task. wf_ex = self.engine.start_workflow( 'wb1.wf1', task_name='t3', env=env ) self.await_workflow_error(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions self.assertEqual(states.ERROR, wf_ex.state) self.assertIsNotNone(wf_ex.state_info) self.assertEqual(2, len(task_execs)) self.assertDictEqual(env, wf_ex.params['env']) self.assertDictEqual(env, wf_ex.context['__env']) task_1_ex = self._assert_single_item(task_execs, name='t1') task_2_ex = self._assert_single_item(task_execs, name='t2') self.assertEqual(states.SUCCESS, task_1_ex.state) self.assertEqual(states.ERROR, task_2_ex.state) self.assertIsNotNone(task_2_ex.state_info) # Update env in workflow execution with the following. updated_env = { 'var1': 'Task 2', 'var2': 'Task 3' } # Resume workflow and re-run failed task. self.engine.rerun_workflow(task_2_ex.id, env=updated_env) wf_ex = db_api.get_workflow_execution(wf_ex.id) self.assertEqual(states.RUNNING, wf_ex.state) self.assertIsNone(wf_ex.state_info) self.assertDictEqual(updated_env, wf_ex.params['env']) self.assertDictEqual(updated_env, wf_ex.context['__env']) # Wait for the workflow to succeed. self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions self.assertEqual(states.SUCCESS, wf_ex.state) self.assertIsNone(wf_ex.state_info) self.assertEqual(3, len(task_execs)) task_1_ex = self._assert_single_item(task_execs, name='t1') task_2_ex = self._assert_single_item(task_execs, name='t2') task_3_ex = self._assert_single_item(task_execs, name='t3') # Check action executions of task 1. 
self.assertEqual(states.SUCCESS, task_1_ex.state) task_1_action_exs = db_api.get_action_executions( task_execution_id=task_1_ex.id ) self.assertEqual(1, len(task_1_action_exs)) self.assertEqual(states.SUCCESS, task_1_action_exs[0].state) self.assertDictEqual( {'output': 'Task 1'}, task_1_action_exs[0].input ) # Check action executions of task 2. self.assertEqual(states.SUCCESS, task_2_ex.state) self.assertIsNone(task_2_ex.state_info) task_2_action_exs = db_api.get_action_executions( task_execution_id=task_2_ex.id ) self.assertEqual(2, len(task_2_action_exs)) # Assert that one action ex is in error and one in success states. self.assertIn( task_2_action_exs[0].state, [states.ERROR, states.SUCCESS] ) self.assertIn( task_2_action_exs[1].state, [states.ERROR, states.SUCCESS] ) self.assertNotEqual( task_2_action_exs[0].state, task_2_action_exs[1].state ) # Assert that one action ex got first env and one got second env self.assertIn( task_2_action_exs[0].input['output'], [env['var1'], updated_env['var1']] ) self.assertIn( task_2_action_exs[1].input['output'], [env['var1'], updated_env['var1']] ) self.assertNotEqual( task_2_action_exs[0].input, task_2_action_exs[1].input ) # Check action executions of task 3. self.assertEqual(states.SUCCESS, task_3_ex.state) task_3_action_exs = db_api.get_action_executions( task_execution_id=task_3_ex.id ) self.assertEqual(1, len(task_3_action_exs)) self.assertEqual(states.SUCCESS, task_3_action_exs[0].state) self.assertDictEqual( {'output': updated_env['var2']}, task_3_action_exs[0].input ) @mock.patch.object( std_actions.EchoAction, 'run', mock.MagicMock( side_effect=[ 'Task 1', # Mock task1 success for initial run. exc.ActionException() # Mock task2 exception for initial run. ] ) ) def test_rerun_from_prev_step(self): wb_service.create_workbook_v2(SIMPLE_WORKBOOK) # Run workflow and fail task. 
        wf_ex = self.engine.start_workflow('wb1.wf1', task_name='t3')

        self.await_workflow_error(wf_ex.id)

        with db_api.transaction():
            wf_ex = db_api.get_workflow_execution(wf_ex.id)

            task_execs = wf_ex.task_executions

        self.assertEqual(states.ERROR, wf_ex.state)
        self.assertIsNotNone(wf_ex.state_info)
        self.assertEqual(2, len(task_execs))

        task_1_ex = self._assert_single_item(task_execs, name='t1')
        task_2_ex = self._assert_single_item(task_execs, name='t2')

        self.assertEqual(states.SUCCESS, task_1_ex.state)
        self.assertEqual(states.ERROR, task_2_ex.state)
        self.assertIsNotNone(task_2_ex.state_info)

        # Attempt to rerun the workflow from a previous task (t1) that
        # already succeeded. This is not supported for reverse workflows.
        e = self.assertRaises(
            exc.MistralError,
            self.engine.rerun_workflow,
            task_1_ex.id
        )

        self.assertIn('not supported', str(e))


# mistral-6.0.0/mistral/tests/unit/utils/test_utils.py

# Copyright 2013 - Mirantis, Inc.
# Copyright 2015 - StackStorm, Inc.
# Copyright 2015 - Huawei Technologies Co. Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy from mistral import exceptions as exc from mistral.tests.unit import base from mistral import utils from mistral.utils import ssh_utils LEFT = { 'key1': { 'key11': "val11" }, 'key2': 'val2' } RIGHT = { 'key1': { 'key11': "val111111", 'key12': "val12", 'key13': { 'key131': 'val131' } }, 'key2': 'val2222', 'key3': 'val3' } class UtilsTest(base.BaseTest): def test_merge_dicts(self): left = copy.deepcopy(LEFT) right = copy.deepcopy(RIGHT) expected = { 'key1': { 'key11': "val111111", 'key12': "val12", 'key13': { 'key131': 'val131' } }, 'key2': 'val2222', 'key3': 'val3' } utils.merge_dicts(left, right) self.assertDictEqual(left, expected) def test_merge_dicts_overwrite_false(self): left = copy.deepcopy(LEFT) right = copy.deepcopy(RIGHT) expected = { 'key1': { 'key11': "val11", 'key12': "val12", 'key13': { 'key131': 'val131' } }, 'key2': 'val2', 'key3': 'val3' } utils.merge_dicts(left, right, overwrite=False) self.assertDictEqual(left, expected) def test_itersubclasses(self): class A(object): pass class B(A): pass class C(A): pass class D(C): pass self.assertEqual([B, C, D], list(utils.iter_subclasses(A))) def test_get_dict_from_entries(self): input = ['param1', {'param2': 2}] input_dict = utils.get_dict_from_entries(input) self.assertIn('param1', input_dict) self.assertIn('param2', input_dict) self.assertEqual(2, input_dict.get('param2')) self.assertIs(input_dict.get('param1'), utils.NotDefined) def test_get_input_dict_from_string(self): self.assertDictEqual( { 'param1': utils.NotDefined, 'param2': 2, 'param3': 'var3' }, utils.get_dict_from_string('param1, param2=2, param3="var3"') ) self.assertDictEqual({}, utils.get_dict_from_string('')) def test_paramiko_to_private_key(self): self.assertRaises( exc.DataAccessException, ssh_utils._to_paramiko_private_key, "../dir" ) self.assertRaises( exc.DataAccessException, ssh_utils._to_paramiko_private_key, "..\\dir" ) def test_cut_string(self): s = 'Hello, Mistral!' 
        self.assertEqual('Hello...', utils.cut_string(s, length=5))
        self.assertEqual(s, utils.cut_string(s, length=100))

    def test_cut_list(self):
        lst = ['Hello, Mistral!', 'Hello, OpenStack!']

        self.assertEqual("['Hello, M...", utils.cut_list(lst, 11))
        self.assertEqual("['Hello, Mistr...", utils.cut_list(lst, 15))
        self.assertEqual("['Hello, Mistral!', 'He...", utils.cut_list(lst, 24))
        self.assertEqual(
            "['Hello, Mistral!', 'Hello, OpenStack!']",
            utils.cut_list(lst, 100)
        )

        self.assertEqual("[1, 2...", utils.cut_list([1, 2, 3, 4, 5], 4))
        self.assertEqual("[1, 2...", utils.cut_list([1, 2, 3, 4, 5], 5))
        self.assertEqual("[1, 2, 3...", utils.cut_list([1, 2, 3, 4, 5], 6))

        self.assertRaises(ValueError, utils.cut_list, (1, 2))

    def test_cut_dict_with_strings(self):
        d = {'key1': 'value1', 'key2': 'value2'}

        s = utils.cut_dict(d, 9)

        self.assertIn(s, ["{'key1': '...", "{'key2': '..."])

        s = utils.cut_dict(d, 13)

        self.assertIn(s, ["{'key1': 'va...", "{'key2': 'va..."])

        s = utils.cut_dict(d, 19)

        self.assertIn(
            s,
            ["{'key1': 'value1', ...", "{'key2': 'value2', ..."]
        )

        self.assertIn(
            utils.cut_dict(d, 100),
            [
                "{'key1': 'value1', 'key2': 'value2'}",
                "{'key2': 'value2', 'key1': 'value1'}"
            ]
        )

    def test_cut_dict_with_digits(self):
        d = {1: 2, 3: 4}

        s = utils.cut_dict(d, 6)

        self.assertIn(s, ["{1: 2, ...", "{3: 4, ..."])

        s = utils.cut_dict(d, 8)

        self.assertIn(s, ["{1: 2, 3...", "{3: 4, 1..."])

        s = utils.cut_dict(d, 100)

        self.assertIn(s, ["{1: 2, 3: 4}", "{3: 4, 1: 2}"])


# mistral-6.0.0/mistral/tests/unit/utils/__init__.py  (empty file)

# mistral-6.0.0/mistral/tests/unit/utils/test_inspect_utils.py

# Copyright 2014 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import time

from mistral.actions import std_actions
from mistral.tests.unit import base
from mistral.utils import inspect_utils as i_u
from mistral.workflow import commands


class ClassWithProperties(object):

    a = 1

    @property
    def prop(self):
        pass


class InspectUtilsTest(base.BaseTest):
    def test_get_parameters_str(self):
        action_class = std_actions.HTTPAction

        parameters_str = i_u.get_arg_list_as_str(action_class.__init__)

        http_action_params = (
            'url, method="GET", params=null, body=null, '
            'headers=null, cookies=null, auth=null, '
            'timeout=null, allow_redirects=null, '
            'proxies=null, verify=null'
        )

        self.assertEqual(http_action_params, parameters_str)

    def test_get_parameters_str_all_mandatory(self):
        clazz = commands.RunTask

        parameters_str = i_u.get_arg_list_as_str(clazz.__init__)

        self.assertEqual(
            'wf_ex, wf_spec, task_spec, ctx, triggered_by=null',
            parameters_str
        )

    def test_get_parameters_str_with_function_parameter(self):
        def test_func(foo, bar=None, test_func=time.sleep):
            pass

        parameters_str = i_u.get_arg_list_as_str(test_func)

        self.assertEqual("foo, bar=null", parameters_str)

    def test_get_public_fields(self):
        attrs = i_u.get_public_fields(ClassWithProperties)

        self.assertEqual(attrs, {'a': 1})


# mistral-6.0.0/mistral/tests/unit/utils/test_keystone_utils.py

# Copyright 2015 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import mock

from mistral import context as auth_context
from mistral import exceptions
from mistral.tests.unit import base
from mistral.utils.openstack import keystone


class KeystoneUtilsTest(base.BaseTest):
    def setUp(self):
        super(KeystoneUtilsTest, self).setUp()

        self.values = {'id': 'my_id'}

    def test_format_url_dollar_sign(self):
        url_template = "http://host:port/v1/$(id)s"
        expected = "http://host:port/v1/my_id"

        self.assertEqual(
            expected,
            keystone.format_url(url_template, self.values)
        )

    def test_format_url_percent_sign(self):
        url_template = "http://host:port/v1/%(id)s"
        expected = "http://host:port/v1/my_id"

        self.assertEqual(
            expected,
            keystone.format_url(url_template, self.values)
        )

    @mock.patch.object(keystone, 'client')
    def test_get_endpoint_for_project_noauth(self, client):
        client().tokens.get_token_data.return_value = {'token': None}

        # service_catalog is not set by default.
        auth_context.set_ctx(base.get_context())
        self.addCleanup(auth_context.set_ctx, None)

        self.assertRaises(
            exceptions.UnauthorizedException,
            keystone.get_endpoint_for_project,
            'keystone'
        )


# mistral-6.0.0/mistral/tests/unit/utils/test_expression_utils.py

# Copyright 2014 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from mistral.tests.unit import base
from mistral.utils import expression_utils as e_u


JSON_INPUT = [
    {
        "this": "is valid",
    },
    {
        "so": "is this",
        "and": "this too",
        "might": "as well",
    },
    "validaswell"
]

JSON_TO_YAML_STR = """- this: is valid
- and: this too
  might: as well
  so: is this
- validaswell
"""


class ExpressionUtilsTest(base.BaseTest):
    def test_yaml_dump(self):
        yaml_str = e_u.yaml_dump_(None, JSON_INPUT)

        self.assertEqual(JSON_TO_YAML_STR, yaml_str)


# mistral-6.0.0/mistral/tests/unit/mstrlfixtures/policy_fixtures.py

# Copyright 2016 NEC Corporation. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fixtures

from mistral.api import access_control as acl
from mistral import policies

from oslo_config import cfg
from oslo_policy import opts as policy_opts
from oslo_policy import policy as oslo_policy


class PolicyFixture(fixtures.Fixture):
    def setUp(self):
        super(PolicyFixture, self).setUp()

        policy_opts.set_defaults(cfg.CONF)

        acl._ENFORCER = oslo_policy.Enforcer(cfg.CONF)
        acl._ENFORCER.register_defaults(policies.list_rules())
        acl._ENFORCER.load_rules()

        self.addCleanup(acl._ENFORCER.clear)

    def register_rules(self, rules):
        enf = acl._ENFORCER

        for rule_name, rule_check_str in rules.items():
            enf.register_default(
                oslo_policy.RuleDefault(rule_name, rule_check_str)
            )

    def change_policy_definition(self, rules):
        enf = acl._ENFORCER

        for rule_name, rule_check_str in rules.items():
            enf.rules[rule_name] = oslo_policy.RuleDefault(
                rule_name, rule_check_str).check


# mistral-6.0.0/mistral/tests/unit/mstrlfixtures/hacking.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# NOTE(morganfainberg) This file shouldn't have flake8 run on it as it has
# code examples that will fail normal CI pep8/flake8 tests. This is expected.
# The code has been moved here to ensure that proper tests occur on the
# hacking/test_checks test cases.
# flake8: noqa

import fixtures


class HackingLogging(fixtures.Fixture):

    shared_imports = """
import logging
from oslo_log import log
from oslo_log import log as logging
"""

    assert_not_using_deprecated_warn = {
        'code': """
# Logger.warn has been deprecated in Python3 in favor of
# Logger.warning
LOG = log.getLogger(__name__)
LOG.warn('text')
""",
        'expected_errors': [
            (8, 9, 'M001'),
        ],
    }


# mistral-6.0.0/mistral/tests/unit/mstrlfixtures/__init__.py  (empty file)

# mistral-6.0.0/mistral/tests/unit/workflow/test_workflow_base.py

# Copyright 2015 - Huawei Technologies Co. Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from mistral.lang import parser as spec_parser
from mistral.tests.unit import base
from mistral.workflow import base as wf_base
from mistral.workflow import direct_workflow as direct_wf
from mistral.workflow import reverse_workflow as reverse_wf

from mistral.db.v2.sqlalchemy import models as db_models


DIRECT_WF = """
---
version: '2.0'

wf:
  type: direct

  tasks:
    task1:
      action: std.echo output="Hey"
"""

REVERSE_WF = """
---
version: '2.0'

wf:
  type: reverse

  tasks:
    task1:
      action: std.echo output="Hey"
"""


class WorkflowControllerTest(base.BaseTest):
    def test_get_controller_direct(self):
        wf_spec = spec_parser.get_workflow_list_spec_from_yaml(DIRECT_WF)[0]

        wf_ex = db_models.WorkflowExecution(spec=wf_spec.to_dict())

        self.assertIsInstance(
            wf_base.get_controller(wf_ex, wf_spec),
            direct_wf.DirectWorkflowController
        )

    def test_get_controller_reverse(self):
        wf_spec = spec_parser.get_workflow_list_spec_from_yaml(REVERSE_WF)[0]

        wf_ex = db_models.WorkflowExecution(spec=wf_spec.to_dict())

        self.assertIsInstance(
            wf_base.get_controller(wf_ex, wf_spec),
            reverse_wf.ReverseWorkflowController
        )


# mistral-6.0.0/mistral/tests/unit/workflow/test_states.py

# Copyright 2013 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from mistral.tests.unit import base from mistral.workflow import states as s class StatesModuleTest(base.BaseTest): def test_is_valid_transition(self): # From IDLE self.assertTrue(s.is_valid_transition(s.IDLE, s.IDLE)) self.assertTrue(s.is_valid_transition(s.IDLE, s.RUNNING)) self.assertTrue(s.is_valid_transition(s.IDLE, s.ERROR)) self.assertFalse(s.is_valid_transition(s.IDLE, s.PAUSED)) self.assertFalse(s.is_valid_transition(s.IDLE, s.RUNNING_DELAYED)) self.assertFalse(s.is_valid_transition(s.IDLE, s.SUCCESS)) # From RUNNING self.assertTrue(s.is_valid_transition(s.RUNNING, s.RUNNING)) self.assertTrue(s.is_valid_transition(s.RUNNING, s.ERROR)) self.assertTrue(s.is_valid_transition(s.RUNNING, s.PAUSED)) self.assertTrue(s.is_valid_transition(s.RUNNING, s.RUNNING_DELAYED)) self.assertTrue(s.is_valid_transition(s.RUNNING, s.SUCCESS)) self.assertFalse(s.is_valid_transition(s.RUNNING, s.IDLE)) # From PAUSED self.assertTrue(s.is_valid_transition(s.PAUSED, s.PAUSED)) self.assertTrue(s.is_valid_transition(s.PAUSED, s.RUNNING)) self.assertTrue(s.is_valid_transition(s.PAUSED, s.ERROR)) self.assertFalse(s.is_valid_transition(s.PAUSED, s.RUNNING_DELAYED)) self.assertFalse(s.is_valid_transition(s.PAUSED, s.SUCCESS)) self.assertFalse(s.is_valid_transition(s.PAUSED, s.IDLE)) # From DELAYED self.assertTrue( s.is_valid_transition(s.RUNNING_DELAYED, s.RUNNING_DELAYED) ) self.assertTrue(s.is_valid_transition(s.RUNNING_DELAYED, s.RUNNING)) self.assertTrue(s.is_valid_transition(s.RUNNING_DELAYED, s.ERROR)) self.assertFalse(s.is_valid_transition(s.RUNNING_DELAYED, s.PAUSED)) self.assertFalse(s.is_valid_transition(s.RUNNING_DELAYED, s.SUCCESS)) self.assertFalse(s.is_valid_transition(s.RUNNING_DELAYED, s.IDLE)) # From SUCCESS self.assertTrue(s.is_valid_transition(s.SUCCESS, s.SUCCESS)) self.assertFalse(s.is_valid_transition(s.SUCCESS, s.RUNNING)) self.assertFalse(s.is_valid_transition(s.SUCCESS, s.ERROR)) self.assertFalse(s.is_valid_transition(s.SUCCESS, s.PAUSED)) 
        self.assertFalse(s.is_valid_transition(s.SUCCESS, s.RUNNING_DELAYED))
        self.assertFalse(s.is_valid_transition(s.SUCCESS, s.IDLE))

        # From ERROR
        self.assertTrue(s.is_valid_transition(s.ERROR, s.ERROR))
        self.assertTrue(s.is_valid_transition(s.ERROR, s.RUNNING))
        self.assertFalse(s.is_valid_transition(s.ERROR, s.PAUSED))
        self.assertFalse(s.is_valid_transition(s.ERROR, s.RUNNING_DELAYED))
        self.assertFalse(s.is_valid_transition(s.ERROR, s.SUCCESS))
        self.assertFalse(s.is_valid_transition(s.ERROR, s.IDLE))

        # From WAITING
        self.assertTrue(s.is_valid_transition(s.WAITING, s.RUNNING))
        self.assertFalse(s.is_valid_transition(s.WAITING, s.SUCCESS))
        self.assertFalse(s.is_valid_transition(s.WAITING, s.PAUSED))
        self.assertFalse(s.is_valid_transition(s.WAITING, s.RUNNING_DELAYED))
        self.assertFalse(s.is_valid_transition(s.WAITING, s.IDLE))
        self.assertFalse(s.is_valid_transition(s.WAITING, s.ERROR))


# mistral-6.0.0/mistral/tests/unit/workflow/test_reverse_workflow.py

# Copyright 2014 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from mistral.db.v2 import api as db_api from mistral.db.v2.sqlalchemy import models from mistral import exceptions as exc from mistral.lang import parser as spec_parser from mistral.services import workbooks as wb_service from mistral.tests.unit import base from mistral.workflow import reverse_workflow as reverse_wf from mistral.workflow import states # TODO(rakhmerov): This workflow is too simple. Add more complicated one. WB = """ --- version: '2.0' name: my_wb workflows: wf: type: reverse tasks: task1: action: std.echo output="Hey" task2: action: std.echo output="Hi!" requires: [task1] """ class ReverseWorkflowControllerTest(base.DbTestCase): def setUp(self): super(ReverseWorkflowControllerTest, self).setUp() wb_service.create_workbook_v2(WB) self.wb_spec = spec_parser.get_workbook_spec_from_yaml(WB) def _create_workflow_execution(self, params): wf_def = db_api.get_workflow_definitions()[0] self.wf_ex = db_api.create_workflow_execution({ 'id': '1-2-3-4', 'spec': self.wb_spec.get_workflows().get('wf').to_dict(), 'state': states.RUNNING, 'params': params, 'workflow_id': wf_def.id }) def _create_task_execution(self, name, state): tasks_spec = self.wb_spec.get_workflows()['wf'].get_tasks() return db_api.create_task_execution({ 'name': name, 'spec': tasks_spec[name].to_dict(), 'state': state, 'workflow_execution_id': self.wf_ex.id }) def test_start_workflow_task2(self): with db_api.transaction(): self._create_workflow_execution({'task_name': 'task2'}) wf_ctrl = reverse_wf.ReverseWorkflowController(self.wf_ex) cmds = wf_ctrl.continue_workflow() self.assertEqual(1, len(cmds)) self.assertEqual('task1', cmds[0].task_spec.get_name()) def test_start_workflow_task1(self): with db_api.transaction(): self._create_workflow_execution({'task_name': 'task1'}) wf_ctrl = reverse_wf.ReverseWorkflowController(self.wf_ex) cmds = wf_ctrl.continue_workflow() self.assertEqual(1, len(cmds)) self.assertEqual('task1', cmds[0].task_spec.get_name()) def test_start_workflow_without_task(self): 
with db_api.transaction(): self._create_workflow_execution({}) wf_ctrl = reverse_wf.ReverseWorkflowController(self.wf_ex) self.assertRaises(exc.WorkflowException, wf_ctrl.continue_workflow) def test_continue_workflow(self): with db_api.transaction(): self._create_workflow_execution({'task_name': 'task2'}) wf_ctrl = reverse_wf.ReverseWorkflowController(self.wf_ex) # Assume task1 completed. task1_ex = self._create_task_execution('task1', states.SUCCESS) task1_ex.executions.append( models.ActionExecution( name='std.echo', workflow_name='wf', state=states.SUCCESS, output={'result': 'Hey'}, accepted=True ) ) cmds = wf_ctrl.continue_workflow() task1_ex.processed = True self.assertEqual(1, len(cmds)) self.assertEqual('task2', cmds[0].task_spec.get_name()) # Now assume task2 completed. task2_ex = self._create_task_execution('task2', states.SUCCESS) task2_ex.executions.append( models.ActionExecution( name='std.echo', workflow_name='wf', state=states.SUCCESS, output={'result': 'Hi!'}, accepted=True ) ) cmds = wf_ctrl.continue_workflow() task1_ex.processed = True self.assertEqual(0, len(cmds)) mistral-6.0.0/mistral/tests/unit/workflow/__init__.py0000666000175100017510000000000013245513262022766 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/unit/workflow/test_direct_workflow.py0000666000175100017510000001165013245513262025507 0ustar zuulzuul00000000000000# Copyright 2015 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
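The reverse-workflow tests above (starting from `task2` first schedules `task1`, then `task2`, then nothing) amount to dependency-driven scheduling over the `requires` graph. A minimal sketch of that selection logic, with hypothetical helper names and plain dicts standing in for task specs (Mistral's real controller works on DB models, not dicts):

```python
# Hypothetical sketch of reverse-workflow scheduling: given a target
# task and each task's 'requires' list, find the tasks that are ready
# to run, i.e. tasks in the target's dependency closure whose
# requirements have all completed.

def dependency_closure(tasks, target):
    """All tasks the target transitively requires, including itself."""
    seen = set()
    stack = [target]
    while stack:
        name = stack.pop()
        if name in seen:
            continue
        seen.add(name)
        stack.extend(tasks[name].get('requires', []))
    return seen


def ready_tasks(tasks, target, completed):
    """Names of tasks that may be scheduled next, sorted for stability."""
    closure = dependency_closure(tasks, target)
    return sorted(
        name for name in closure
        if name not in completed
        and all(r in completed for r in tasks[name].get('requires', []))
    )


# Mirrors the 'my_wb' workbook above: task2 requires task1.
wf_tasks = {
    'task1': {'action': 'std.echo output="Hey"'},
    'task2': {'action': 'std.echo output="Hi!"', 'requires': ['task1']},
}
```

With an empty `completed` set this yields `['task1']`, after `task1` completes it yields `['task2']`, and after both it yields `[]`, matching the three phases asserted in `test_start_workflow_task2` and `test_continue_workflow`.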
import mock from mistral.db.v2 import api as db_api from mistral.db.v2.sqlalchemy import models from mistral import exceptions as exc from mistral.lang import parser as spec_parser from mistral.services import workflows as wf_service from mistral.tests.unit import base from mistral.workflow import direct_workflow as d_wf from mistral.workflow import states class DirectWorkflowControllerTest(base.DbTestCase): def _prepare_test(self, wf_text): wfs = wf_service.create_workflows(wf_text) wf_spec = spec_parser.get_workflow_spec_by_definition_id( wfs[0].id, wfs[0].updated_at ) wf_ex = models.WorkflowExecution( id='1-2-3-4', spec=wf_spec.to_dict(), state=states.RUNNING, workflow_id=wfs[0].id, input={}, context={} ) self.wf_ex = wf_ex self.wf_spec = wf_spec return wf_ex def _create_task_execution(self, name, state): tasks_spec = self.wf_spec.get_tasks() task_ex = models.TaskExecution( id=self.getUniqueString('id'), name=name, spec=tasks_spec[name].to_dict(), state=state ) self.wf_ex.task_executions.append(task_ex) return task_ex @mock.patch.object(db_api, 'get_workflow_execution') @mock.patch.object(db_api, 'get_task_execution') def test_continue_workflow(self, get_task_execution, get_workflow_execution): wf_text = """--- version: '2.0' wf: type: direct tasks: task1: action: std.echo output="Hey" publish: res1: <% $.task1 %> on-complete: - task2: <% $.res1 = 'Hey' %> - task3: <% $.res1 = 'Not Hey' %> task2: action: std.echo output="Hi" task3: action: std.echo output="Hoy" """ wf_ex = self._prepare_test(wf_text) get_workflow_execution.return_value = wf_ex wf_ctrl = d_wf.DirectWorkflowController(wf_ex) # Workflow execution is in initial step. No running tasks. cmds = wf_ctrl.continue_workflow() self.assertEqual(1, len(cmds)) cmd = cmds[0] self.assertIs(wf_ctrl.wf_ex, cmd.wf_ex) self.assertIsNotNone(cmd.task_spec) self.assertEqual('task1', cmd.task_spec.get_name()) self.assertEqual(states.RUNNING, self.wf_ex.state) # Assume that 'task1' completed successfully. 
task1_ex = self._create_task_execution('task1', states.SUCCESS) task1_ex.published = {'res1': 'Hey'} get_task_execution.return_value = task1_ex task1_ex.action_executions.append( models.ActionExecution( name='std.echo', workflow_name='wf', state=states.SUCCESS, output={'result': 'Hey'}, accepted=True, runtime_context={'index': 0} ) ) cmds = wf_ctrl.continue_workflow() task1_ex.processed = True self.assertEqual(1, len(cmds)) self.assertEqual('task2', cmds[0].task_spec.get_name()) self.assertEqual(states.RUNNING, self.wf_ex.state) self.assertEqual(states.SUCCESS, task1_ex.state) # Now assume that 'task2' completed successfully. task2_ex = self._create_task_execution('task2', states.SUCCESS) task2_ex.action_executions.append( models.ActionExecution( name='std.echo', workflow_name='wf', state=states.SUCCESS, output={'result': 'Hi'}, accepted=True ) ) cmds = wf_ctrl.continue_workflow() task2_ex.processed = True self.assertEqual(0, len(cmds)) def test_continue_workflow_no_start_tasks(self): wf_text = """--- version: '2.0' wf: description: > Invalid workflow that doesn't have start tasks (tasks with no inbound connections). type: direct tasks: task1: on-complete: task2 task2: on-complete: task1 """ self.assertRaises(exc.DSLParsingException, self._prepare_test, wf_text) mistral-6.0.0/mistral/tests/unit/hacking/0000775000175100017510000000000013245513604020417 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/unit/hacking/test_checks.py0000666000175100017510000001234513245513262023277 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import textwrap import mock import pep8 from mistral.hacking import checks from mistral.tests.unit import base from mistral.tests.unit.mstrlfixtures import hacking as hacking_fixtures class BaseLoggingCheckTest(base.BaseTest): def setUp(self): super(BaseLoggingCheckTest, self).setUp() self.code_ex = self.useFixture(self.get_fixture()) self.addCleanup(delattr, self, 'code_ex') def get_checker(self): return checks.CheckForLoggingIssues def get_fixture(self): return hacking_fixtures.HackingLogging() # We are patching pep8 so that only the check under test is actually # installed. @mock.patch('pep8._checks', {'physical_line': {}, 'logical_line': {}, 'tree': {}}) def run_check(self, code, checker, filename=None): pep8.register_check(checker) lines = textwrap.dedent(code).strip().splitlines(True) checker = pep8.Checker(filename=filename, lines=lines) with mock.patch('pep8.StandardReport.get_file_results'): checker.check_all() checker.report._deferred_print.sort() return checker.report._deferred_print def _assert_has_errors(self, code, checker, expected_errors=None, filename=None): # Pull out the parts of the error that we'll match against. 
actual_errors = [e[:3] for e in self.run_check(code, checker, filename)] self.assertEqual(expected_errors or [], actual_errors) def _assert_has_no_errors(self, code, checker, filename=None): self._assert_has_errors(code, checker, filename=filename) def test_no_assert_equal_true_false(self): code = """ self.assertEqual(context_is_admin, True) self.assertEqual(context_is_admin, False) self.assertEqual(True, context_is_admin) self.assertEqual(False, context_is_admin) self.assertNotEqual(context_is_admin, True) self.assertNotEqual(context_is_admin, False) self.assertNotEqual(True, context_is_admin) self.assertNotEqual(False, context_is_admin) """ errors = [(1, 0, 'M319'), (2, 0, 'M319'), (3, 0, 'M319'), (4, 0, 'M319'), (5, 0, 'M319'), (6, 0, 'M319'), (7, 0, 'M319'), (8, 0, 'M319')] self._assert_has_errors(code, checks.no_assert_equal_true_false, expected_errors=errors) code = """ self.assertEqual(context_is_admin, stuff) self.assertNotEqual(context_is_admin, stuff) """ self._assert_has_no_errors(code, checks.no_assert_equal_true_false) def test_no_assert_true_false_is_not(self): code = """ self.assertTrue(test is None) self.assertTrue(False is my_variable) self.assertFalse(None is test) self.assertFalse(my_variable is False) """ errors = [(1, 0, 'M320'), (2, 0, 'M320'), (3, 0, 'M320'), (4, 0, 'M320')] self._assert_has_errors(code, checks.no_assert_true_false_is_not, expected_errors=errors) def test_check_python3_xrange(self): func = checks.check_python3_xrange self.assertEqual(1, len(list(func('for i in xrange(10)')))) self.assertEqual(1, len(list(func('for i in xrange (10)')))) self.assertEqual(0, len(list(func('for i in range(10)')))) self.assertEqual(0, len(list(func('for i in six.moves.range(10)')))) def test_dict_iteritems(self): self.assertEqual(1, len(list(checks.check_python3_no_iteritems( "obj.iteritems()")))) self.assertEqual(0, len(list(checks.check_python3_no_iteritems( "six.iteritems(ob))")))) def test_dict_iterkeys(self): self.assertEqual(1, 
len(list(checks.check_python3_no_iterkeys( "obj.iterkeys()")))) self.assertEqual(0, len(list(checks.check_python3_no_iterkeys( "six.iterkeys(ob))")))) def test_dict_itervalues(self): self.assertEqual(1, len(list(checks.check_python3_no_itervalues( "obj.itervalues()")))) self.assertEqual(0, len(list(checks.check_python3_no_itervalues( "six.itervalues(ob))")))) class TestLoggingWithWarn(BaseLoggingCheckTest): def test_using_deprecated_warn(self): data = self.code_ex.assert_not_using_deprecated_warn code = self.code_ex.shared_imports + data['code'] errors = data['expected_errors'] self._assert_has_errors(code, checks.CheckForLoggingIssues, expected_errors=errors) mistral-6.0.0/mistral/tests/unit/hacking/__init__.py0000666000175100017510000000000013245513262022520 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/unit/test_exception_base.py0000666000175100017510000000327513245513262023425 0ustar zuulzuul00000000000000# Copyright 2014 Rackspace Hosting. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
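The checks exercised above follow the flake8/pep8 plugin convention: a function receives a `logical_line` and yields `(offset, message)` tuples for each offence. A minimal sketch in the same spirit as `check_python3_xrange` (the `M999` code is made up for illustration; the `M319`/`M320` codes in the tests are Mistral's real ones):

```python
import re

# Minimal sketch of a pep8-style logical-line check. A real check is
# registered with the framework; here we just call it directly.


def check_no_xrange(logical_line):
    """Yield an offence for Python 2 xrange() usage.

    M999: xrange() does not exist in Python 3; use range() instead.
    """
    if re.search(r"\bxrange\s*\(", logical_line):
        yield 0, "M999: xrange() is removed in Python 3, use range()"
```

Because the check is a generator, the tests above count offences with `len(list(func(line)))`, which is why a clean line yields an empty list rather than raising.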
import six from mistral import exceptions from mistral.tests.unit import base class ExceptionTestCase(base.BaseTest): """Test cases for exception code.""" def test_nf_with_message(self): exc = exceptions.DBEntityNotFoundError('check_for_this') self.assertIn('check_for_this', six.text_type(exc)) self.assertEqual(404, exc.http_code) def test_nf_with_no_message(self): exc = exceptions.DBEntityNotFoundError() self.assertIn("Object not found", six.text_type(exc)) self.assertEqual(404, exc.http_code,) def test_duplicate_obj_code(self): exc = exceptions.DBDuplicateEntryError() self.assertIn("Database object already exists", six.text_type(exc)) self.assertEqual(409, exc.http_code,) def test_default_code(self): exc = exceptions.EngineException() self.assertEqual(500, exc.http_code) def test_default_message(self): exc = exceptions.EngineException() self.assertIn("An unknown exception occurred", six.text_type(exc)) mistral-6.0.0/mistral/tests/unit/lang/0000775000175100017510000000000013245513604017734 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/unit/lang/v2/0000775000175100017510000000000013245513604020263 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/unit/lang/v2/test_tasks.py0000666000175100017510000006635213245513262023037 0ustar zuulzuul00000000000000# Copyright 2015 - Huawei Technologies Co. Ltd # Copyright 2015 - StackStorm, Inc. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
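The `ExceptionTestCase` above relies on a common OpenStack pattern: a base exception class carrying a default `http_code` and default message, with subclasses overriding both. A hypothetical sketch of that pattern (class names here are illustrative, not Mistral's exact hierarchy):

```python
# Hypothetical sketch of the exception pattern exercised by
# ExceptionTestCase: class-level defaults, overridable per subclass,
# with an optional instance message.


class MistralError(Exception):
    http_code = 500
    message = "An unknown exception occurred"

    def __init__(self, message=None):
        # Fall back to the class-level default when no message is given.
        super(MistralError, self).__init__(message or self.message)


class NotFoundError(MistralError):
    http_code = 404
    message = "Object not found"


class DuplicateEntryError(MistralError):
    http_code = 409
    message = "Database object already exists"
```

Keeping `http_code` on the class lets the REST layer map any caught exception to a response status with a single attribute lookup instead of a type-by-type dispatch.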
from mistral.lang.v2 import workflows from mistral.tests.unit.lang.v2 import base as v2_base from mistral import utils class TaskSpecValidation(v2_base.WorkflowSpecValidationTestCase): def test_type_injection(self): tests = [ ({'type': 'direct'}, False), ({'type': 'reverse'}, False) ] for wf_type, expect_error in tests: overlay = {'test': wf_type} wfs_spec = self._parse_dsl_spec(add_tasks=True, changes=overlay, expect_error=expect_error) if not expect_error: self.assertIsInstance(wfs_spec, workflows.WorkflowListSpec) self.assertEqual(1, len(wfs_spec.get_workflows())) wf_spec = wfs_spec.get_workflows()[0] self.assertEqual(wf_type['type'], wf_spec.get_type()) for task in wf_spec.get_tasks(): self.assertEqual(task._data['type'], wf_type['type']) def test_action_or_workflow(self): tests = [ ({'action': 'std.noop'}, False), ({'action': 'std.http url="openstack.org"'}, False), ({'action': 'std.http url="openstack.org" timeout=10'}, False), ({'action': 'std.http url=<% $.url %>'}, False), ({'action': 'std.http url=<% $.url %> timeout=<% $.t %>'}, False), ({'action': 'std.http url=<% * %>'}, True), ({'action': 'std.http url={{ _.url }}'}, False), ({'action': 'std.http url={{ _.url }} timeout={{ _.t }}'}, False), ({'action': 'std.http url={{ $ }}'}, True), ({'workflow': 'test.wf'}, False), ({'workflow': 'test.wf k1="v1"'}, False), ({'workflow': 'test.wf k1="v1" k2="v2"'}, False), ({'workflow': 'test.wf k1=<% $.v1 %>'}, False), ({'workflow': 'test.wf k1=<% $.v1 %> k2=<% $.v2 %>'}, False), ({'workflow': 'test.wf k1=<% * %>'}, True), ({'workflow': 'test.wf k1={{ _.v1 }}'}, False), ({'workflow': 'test.wf k1={{ _.v1 }} k2={{ _.v2 }}'}, False), ({'workflow': 'test.wf k1={{ $ }}'}, True), ({'action': 'std.noop', 'workflow': 'test.wf'}, True), ({'action': 123}, True), ({'workflow': 123}, True), ({'action': ''}, True), ({'workflow': ''}, True), ({'action': None}, True), ({'workflow': None}, True) ] for task, expect_error in tests: overlay = {'test': {'tasks': {'task1': task}}} 
self._parse_dsl_spec( add_tasks=False, changes=overlay, expect_error=expect_error ) def test_inputs(self): tests = [ ({'input': ''}, True), ({'input': {}}, True), ({'input': None}, True), ({'input': {'k1': 'v1'}}, False), ({'input': {'k1': '<% $.v1 %>'}}, False), ({'input': {'k1': '<% 1 + 2 %>'}}, False), ({'input': {'k1': '<% * %>'}}, True), ({'input': {'k1': '{{ _.v1 }}'}}, False), ({'input': {'k1': '{{ 1 + 2 }}'}}, False), ({'input': {'k1': '{{ * }}'}}, True) ] for task_input, expect_error in tests: overlay = {'test': {'tasks': {'task1': {'action': 'test.mock'}}}} utils.merge_dicts(overlay['test']['tasks']['task1'], task_input) self._parse_dsl_spec( add_tasks=False, changes=overlay, expect_error=expect_error ) def test_with_items(self): tests = [ ({'with-items': ''}, True), ({'with-items': []}, True), ({'with-items': ['']}, True), ({'with-items': None}, True), ({'with-items': 12345}, True), ({'with-items': 'x in y'}, True), ({'with-items': '<% $.y %>'}, True), ({'with-items': 'x in <% $.y %>'}, False), ({'with-items': ['x in [1, 2, 3]']}, False), ({'with-items': ['x in <% $.y %>']}, False), ({'with-items': ['x in <% $.y %>', 'i in [1, 2, 3]']}, False), ({'with-items': ['x in <% $.y %>', 'i in <% $.j %>']}, False), ({'with-items': ['x in <% * %>']}, True), ({'with-items': ['x in <% $.y %>', 'i in <% * %>']}, True), ({'with-items': '{{ _.y }}'}, True), ({'with-items': 'x in {{ _.y }}'}, False), ({'with-items': ['x in [1, 2, 3]']}, False), ({'with-items': ['x in {{ _.y }}']}, False), ({'with-items': ['x in {{ _.y }}', 'i in [1, 2, 3]']}, False), ({'with-items': ['x in {{ _.y }}', 'i in {{ _.j }}']}, False), ({'with-items': ['x in {{ * }}']}, True), ({'with-items': ['x in {{ _.y }}', 'i in {{ * }}']}, True) ] for with_item, expect_error in tests: overlay = {'test': {'tasks': {'get': with_item}}} self._parse_dsl_spec( add_tasks=True, changes=overlay, expect_error=expect_error ) def test_publish(self): tests = [ ({'publish': ''}, True), ({'publish': {}}, True), 
({'publish': None}, True), ({'publish': {'k1': 'v1'}}, False), ({'publish': {'k1': '<% $.v1 %>'}}, False), ({'publish': {'k1': '<% 1 + 2 %>'}}, False), ({'publish': {'k1': '<% * %>'}}, True), ({'publish': {'k1': '{{ _.v1 }}'}}, False), ({'publish': {'k1': '{{ 1 + 2 }}'}}, False), ({'publish': {'k1': '{{ * }}'}}, True) ] for output, expect_error in tests: overlay = {'test': {'tasks': {'task1': {'action': 'test.mock'}}}} utils.merge_dicts(overlay['test']['tasks']['task1'], output) self._parse_dsl_spec( add_tasks=False, changes=overlay, expect_error=expect_error ) def test_publish_on_error(self): tests = [ ({'publish-on-error': ''}, True), ({'publish-on-error': {}}, True), ({'publish-on-error': None}, True), ({'publish-on-error': {'k1': 'v1'}}, False), ({'publish-on-error': {'k1': '<% $.v1 %>'}}, False), ({'publish-on-error': {'k1': '<% 1 + 2 %>'}}, False), ({'publish-on-error': {'k1': '<% * %>'}}, True), ({'publish-on-error': {'k1': '{{ _.v1 }}'}}, False), ({'publish-on-error': {'k1': '{{ 1 + 2 }}'}}, False), ({'publish-on-error': {'k1': '{{ * }}'}}, True) ] for output, expect_error in tests: overlay = {'test': {'tasks': {'task1': {'action': 'test.mock'}}}} utils.merge_dicts(overlay['test']['tasks']['task1'], output) self._parse_dsl_spec( add_tasks=False, changes=overlay, expect_error=expect_error ) def test_policies(self): tests = [ ({'retry': {'count': 3, 'delay': 1}}, False), ({'retry': { 'continue-on': '<% 1 %>', 'delay': 2, 'break-on': '<% 1 %>', 'count': 2 }}, False), ({'retry': { 'count': 3, 'delay': 1, 'continue-on': '<% 1 %>' }}, False), ({'retry': {'count': '<% 3 %>', 'delay': 1}}, False), ({'retry': {'count': '<% * %>', 'delay': 1}}, True), ({'retry': {'count': 3, 'delay': '<% 1 %>'}}, False), ({'retry': {'count': 3, 'delay': '<% * %>'}}, True), ({'retry': { 'continue-on': '{{ 1 }}', 'delay': 2, 'break-on': '{{ 1 }}', 'count': 2 }}, False), ({'retry': { 'count': 3, 'delay': 1, 'continue-on': '{{ 1 }}' }}, False), ({'retry': {'count': '{{ 3 }}', 'delay': 
1}}, False), ({'retry': {'count': '{{ * }}', 'delay': 1}}, True), ({'retry': {'count': 3, 'delay': '{{ 1 }}'}}, False), ({'retry': {'count': 3, 'delay': '{{ * }}'}}, True), ({'retry': {'count': -3, 'delay': 1}}, True), ({'retry': {'count': 3, 'delay': -1}}, True), ({'retry': {'count': '3', 'delay': 1}}, True), ({'retry': {'count': 3, 'delay': '1'}}, True), ({'retry': 'count=3 delay=1 break-on=<% false %>'}, False), ({'retry': 'count=3 delay=1 break-on={{ false }}'}, False), ({'retry': 'count=3 delay=1'}, False), ({'retry': 'coun=3 delay=1'}, True), ({'retry': None}, True), ({'wait-before': 1}, False), ({'wait-before': '<% 1 %>'}, False), ({'wait-before': '<% * %>'}, True), ({'wait-before': '{{ 1 }}'}, False), ({'wait-before': '{{ * }}'}, True), ({'wait-before': -1}, True), ({'wait-before': 1.0}, True), ({'wait-before': '1'}, True), ({'wait-after': 1}, False), ({'wait-after': '<% 1 %>'}, False), ({'wait-after': '<% * %>'}, True), ({'wait-after': '{{ 1 }}'}, False), ({'wait-after': '{{ * }}'}, True), ({'wait-after': -1}, True), ({'wait-after': 1.0}, True), ({'wait-after': '1'}, True), ({'timeout': 300}, False), ({'timeout': '<% 300 %>'}, False), ({'timeout': '<% * %>'}, True), ({'timeout': '{{ 300 }}'}, False), ({'timeout': '{{ * }}'}, True), ({'timeout': -300}, True), ({'timeout': 300.0}, True), ({'timeout': '300'}, True), ({'pause-before': False}, False), ({'pause-before': '<% False %>'}, False), ({'pause-before': '<% * %>'}, True), ({'pause-before': '{{ False }}'}, False), ({'pause-before': '{{ * }}'}, True), ({'pause-before': 'False'}, True), ({'concurrency': 10}, False), ({'concurrency': '<% 10 %>'}, False), ({'concurrency': '<% * %>'}, True), ({'concurrency': '{{ 10 }}'}, False), ({'concurrency': '{{ * }}'}, True), ({'concurrency': -10}, True), ({'concurrency': 10.0}, True), ({'concurrency': '10'}, True) ] for policy, expect_error in tests: overlay = {'test': {'tasks': {'get': policy}}} self._parse_dsl_spec( add_tasks=True, changes=overlay, 
expect_error=expect_error ) def test_direct_transition(self): tests = [ ({'on-success': ['email']}, False), ({'on-success': [{'email': '<% 1 %>'}]}, False), ({'on-success': [{'email': '<% 1 %>'}, 'echo']}, False), ({'on-success': [{'email': '<% $.v1 in $.v2 %>'}]}, False), ({'on-success': [{'email': '<% * %>'}]}, True), ({'on-success': [{'email': '{{ 1 }}'}]}, False), ({'on-success': [{'email': '{{ 1 }}'}, 'echo']}, False), ({'on-success': [{'email': '{{ _.v1 in _.v2 }}'}]}, False), ({'on-success': [{'email': '{{ * }}'}]}, True), ({'on-success': 'email'}, False), ({'on-success': None}, True), ({'on-success': ['']}, True), ({'on-success': []}, True), ({'on-success': ['email', 'email']}, True), ({'on-success': ['email', 12345]}, True), ({'on-error': ['email']}, False), ({'on-error': [{'email': '<% 1 %>'}]}, False), ({'on-error': [{'email': '<% 1 %>'}, 'echo']}, False), ({'on-error': [{'email': '<% $.v1 in $.v2 %>'}]}, False), ({'on-error': [{'email': '<% * %>'}]}, True), ({'on-error': [{'email': '{{ 1 }}'}]}, False), ({'on-error': [{'email': '{{ 1 }}'}, 'echo']}, False), ({'on-error': [{'email': '{{ _.v1 in _.v2 }}'}]}, False), ({'on-error': [{'email': '{{ * }}'}]}, True), ({'on-error': 'email'}, False), ({'on-error': None}, True), ({'on-error': ['']}, True), ({'on-error': []}, True), ({'on-error': ['email', 'email']}, True), ({'on-error': ['email', 12345]}, True), ({'on-complete': ['email']}, False), ({'on-complete': [{'email': '<% 1 %>'}]}, False), ({'on-complete': [{'email': '<% 1 %>'}, 'echo']}, False), ({'on-complete': [{'email': '<% $.v1 in $.v2 %>'}]}, False), ({'on-complete': [{'email': '<% * %>'}]}, True), ({'on-complete': [{'email': '{{ 1 }}'}]}, False), ({'on-complete': [{'email': '{{ 1 }}'}, 'echo']}, False), ({'on-complete': [{'email': '{{ _.v1 in _.v2 }}'}]}, False), ({'on-complete': [{'email': '{{ * }}'}]}, True), ({'on-complete': 'email'}, False), ({'on-complete': None}, True), ({'on-complete': ['']}, True), ({'on-complete': []}, True), 
({'on-complete': ['email', 'email']}, True), ({'on-complete': ['email', 12345]}, True) ] for transition, expect_error in tests: overlay = {'test': {'tasks': {}}} utils.merge_dicts(overlay['test']['tasks'], {'get': transition}) self._parse_dsl_spec( add_tasks=True, changes=overlay, expect_error=expect_error ) def test_direct_transition_advanced_schema(self): tests = [ ({'on-success': {'publish': {'var1': 1234}}}, True), ({'on-success': {'publish': {'branch': {'var1': 1234}}}}, False), ( { 'on-success': { 'publish': { 'branch': {'var1': 1234}, 'global': {'global_var1': 'val'}, 'atomic': {'atomic_var1': '<% my_func() %>'} } } }, False ), ( { 'on-success': { 'publish': { 'branch': {'var1': 1234}, 'global': {'global_var1': '<% * %>'}, 'atomic': {'atomic_var1': '<% my_func() %>'} } } }, True ), ( { 'on-success': { 'publish': { 'branch': {'var1': 1234}, 'global': {'global_var1': 'val'}, 'atomic': {'atomic_var1': '<% my_func() %>'} }, 'next': 'email' } }, False ), ( { 'on-success': { 'publish': { 'branch': {'var1': 1234}, 'global': {'global_var1': 'val'}, 'atomic': {'atomic_var1': '<% my_func() %>'} }, 'next': ['email'] } }, False ), ( { 'on-success': { 'publish': { 'branch': {'var1': 1234}, 'global': {'global_var1': 'val'}, 'atomic': {'atomic_var1': '<% my_func() %>'} }, 'next': [{'email': '<% 1 %>'}] } }, False ), ( { 'on-success': { 'publish': { 'branch': {'var1': 1234}, 'global': {'global_var1': 'val'}, 'atomic': {'atomic_var1': '<% my_func() %>'} }, 'next': [{'email': '<% $.v1 and $.v2 %>'}] } }, False ), ( { 'on-success': { 'publish': { 'branch': {'var1': 1234}, 'global': {'global_var1': 'val'}, 'atomic': {'atomic_var1': '<% my_func() %>'} }, 'next': [{'email': '<% * %>'}] } }, True ), ({'on-success': {'next': [{'email': '<% $.v1 %>'}]}}, False), ({'on-success': {'next': 'email'}}, False), ({'on-success': {'next': ['email']}}, False), ({'on-success': {'next': [{'email': 'email'}]}}, True), ({'on-error': {'publish': {'var1': 1234}}}, True), ({'on-error': {'publish': 
{'branch': {'var1': 1234}}}}, False), ( { 'on-error': { 'publish': { 'branch': {'var1': 1234}, 'global': {'global_var1': 'val'}, 'atomic': {'atomic_var1': '<% my_func() %>'} } } }, False ), ( { 'on-error': { 'publish': { 'branch': {'var1': 1234}, 'global': {'global_var1': '<% * %>'}, 'atomic': {'atomic_var1': '<% my_func() %>'} } } }, True ), ( { 'on-error': { 'publish': { 'branch': {'var1': 1234}, 'global': {'global_var1': 'val'}, 'atomic': {'atomic_var1': '<% my_func() %>'} }, 'next': 'email' } }, False ), ( { 'on-error': { 'publish': { 'branch': {'var1': 1234}, 'global': {'global_var1': 'val'}, 'atomic': {'atomic_var1': '<% my_func() %>'} }, 'next': ['email'] } }, False ), ( { 'on-error': { 'publish': { 'branch': {'var1': 1234}, 'global': {'global_var1': 'val'}, 'atomic': {'atomic_var1': '<% my_func() %>'} }, 'next': [{'email': '<% 1 %>'}] } }, False ), ( { 'on-error': { 'publish': { 'branch': {'var1': 1234}, 'global': {'global_var1': 'val'}, 'atomic': {'atomic_var1': '<% my_func() %>'} }, 'next': [{'email': '<% $.v1 and $.v2 %>'}] } }, False ), ( { 'on-error': { 'publish': { 'branch': {'var1': 1234}, 'global': {'global_var1': 'val'}, 'atomic': {'atomic_var1': '<% my_func() %>'} }, 'next': [{'email': '<% * %>'}] } }, True ), ({'on-error': {'next': [{'email': '<% $.v1 %>'}]}}, False), ({'on-error': {'next': 'email'}}, False), ({'on-error': {'next': ['email']}}, False), ({'on-error': {'next': [{'email': 'email'}]}}, True), ({'on-complete': {'publish': {'var1': 1234}}}, True), ({'on-complete': {'publish': {'branch': {'var1': 1234}}}}, False), ( { 'on-complete': { 'publish': { 'branch': {'var1': 1234}, 'global': {'global_var1': 'val'}, 'atomic': {'atomic_var1': '<% my_func() %>'} } } }, False ), ( { 'on-complete': { 'publish': { 'branch': {'var1': 1234}, 'global': {'global_var1': '<% * %>'}, 'atomic': {'atomic_var1': '<% my_func() %>'} } } }, True ), ( { 'on-complete': { 'publish': { 'branch': {'var1': 1234}, 'global': {'global_var1': 'val'}, 'atomic': 
{'atomic_var1': '<% my_func() %>'} }, 'next': 'email' } }, False ), ( { 'on-complete': { 'publish': { 'branch': {'var1': 1234}, 'global': {'global_var1': 'val'}, 'atomic': {'atomic_var1': '<% my_func() %>'} }, 'next': ['email'] } }, False ), ( { 'on-complete': { 'publish': { 'branch': {'var1': 1234}, 'global': {'global_var1': 'val'}, 'atomic': {'atomic_var1': '<% my_func() %>'} }, 'next': [{'email': '<% 1 %>'}] } }, False ), ( { 'on-complete': { 'publish': { 'branch': {'var1': 1234}, 'global': {'global_var1': 'val'}, 'atomic': {'atomic_var1': '<% my_func() %>'} }, 'next': [{'email': '<% $.v1 and $.v2 %>'}] } }, False ), ( { 'on-complete': { 'publish': { 'branch': {'var1': 1234}, 'global': {'global_var1': 'val'}, 'atomic': {'atomic_var1': '<% my_func() %>'} }, 'next': [{'email': '<% * %>'}] } }, True ), ({'on-complete': {'next': [{'email': '<% $.v1 %>'}]}}, False), ({'on-complete': {'next': 'email'}}, False), ({'on-complete': {'next': ['email']}}, False), ({'on-complete': {'next': [{'email': 'email'}]}}, True) ] for transition, expect_error in tests: overlay = {'test': {'tasks': {}}} utils.merge_dicts(overlay['test']['tasks'], {'get': transition}) self._parse_dsl_spec( add_tasks=True, changes=overlay, expect_error=expect_error ) def test_join(self): tests = [ ({'join': ''}, True), ({'join': None}, True), ({'join': 'all'}, False), ({'join': 'one'}, False), ({'join': 0}, False), ({'join': 2}, False), ({'join': 3}, True), ({'join': '3'}, True), ({'join': -3}, True) ] on_success = {'on-success': ['email']} for join, expect_error in tests: overlay = {'test': {'tasks': {}}} utils.merge_dicts(overlay['test']['tasks'], {'get': on_success}) utils.merge_dicts(overlay['test']['tasks'], {'echo': on_success}) utils.merge_dicts(overlay['test']['tasks'], {'email': join}) self._parse_dsl_spec( add_tasks=True, changes=overlay, expect_error=expect_error ) def test_requires(self): tests = [ ({'requires': ''}, True), ({'requires': []}, True), ({'requires': ['']}, True), ({'requires': 
None}, True), ({'requires': 12345}, True), ({'requires': ['echo']}, False), ({'requires': ['echo', 'get']}, False), ({'requires': 'echo'}, False), ] for require, expect_error in tests: overlay = {'test': {'tasks': {}}} utils.merge_dicts(overlay['test'], {'type': 'reverse'}) utils.merge_dicts(overlay['test']['tasks'], {'email': require}) self._parse_dsl_spec( add_tasks=True, changes=overlay, expect_error=expect_error ) def test_keep_result(self): tests = [ ({'keep-result': ''}, True), ({'keep-result': []}, True), ({'keep-result': 'asd'}, True), ({'keep-result': None}, True), ({'keep-result': 12345}, True), ({'keep-result': True}, False), ({'keep-result': False}, False), ({'keep-result': "<% 'a' in $.val %>"}, False), ({'keep-result': '<% 1 + 2 %>'}, False), ({'keep-result': '<% * %>'}, True), ({'keep-result': "{{ 'a' in _.val }}"}, False), ({'keep-result': '{{ 1 + 2 }}'}, False), ({'keep-result': '{{ * }}'}, True) ] for keep_result, expect_error in tests: overlay = {'test': {'tasks': {}}} utils.merge_dicts(overlay['test']['tasks'], {'email': keep_result}) self._parse_dsl_spec( add_tasks=True, changes=overlay, expect_error=expect_error ) mistral-6.0.0/mistral/tests/unit/lang/v2/test_workflows.py0000666000175100017510000003723713245513262023747 0ustar zuulzuul00000000000000# Copyright 2015 - StackStorm, Inc. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
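The table-driven `join` cases in `test_join` above encode two layers of validation: a schema layer (`'all'`, `'one'`, or a non-negative integer; string digits like `'3'` are rejected) and a semantic layer (the integer must not exceed the number of inbound tasks, which is why `join: 3` fails when only two tasks point at the join). A hypothetical condensed validator capturing both rules (not Mistral's actual jsonschema-based implementation):

```python
# Hypothetical sketch of the 'join' validation rules implied by the
# tests: 'all', 'one', or an integer in [0, inbound_count].


def validate_join(value, inbound_count):
    """Return True if the 'join' value is acceptable for a task."""
    if value in ('all', 'one'):
        return True
    # bool is a subclass of int in Python; exclude it explicitly so
    # 'join: true' would not slip through as 'join: 1'.
    if isinstance(value, bool) or not isinstance(value, int):
        return False
    return 0 <= value <= inbound_count
```

Called with `inbound_count=2` (two upstream tasks, as in `test_join`), this accepts `'all'`, `'one'`, `0`, and `2`, and rejects `3`, `'3'`, `-3`, `''`, and `None`, matching the expected-error flags in the test table.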
import copy

import yaml

from mistral import exceptions as exc
from mistral.tests.unit.lang.v2 import base
from mistral import utils


class WorkflowSpecValidation(base.WorkflowSpecValidationTestCase):

    def test_workflow_types(self):
        tests = [
            ({'type': 'direct'}, False),
            ({'type': 'reverse'}, False),
            ({'type': 'circular'}, True),
            ({'type': None}, True)
        ]

        for wf_type, expect_error in tests:
            overlay = {'test': wf_type}

            self._parse_dsl_spec(
                add_tasks=True,
                changes=overlay,
                expect_error=expect_error
            )

    def test_direct_workflow(self):
        overlay = {'test': {'type': 'direct', 'tasks': {}}}
        join = {'join': 'all'}
        on_success = {'on-success': ['email']}

        utils.merge_dicts(overlay['test']['tasks'], {'get': on_success})
        utils.merge_dicts(overlay['test']['tasks'], {'echo': on_success})
        utils.merge_dicts(overlay['test']['tasks'], {'email': join})

        wfs_spec = self._parse_dsl_spec(
            add_tasks=True,
            changes=overlay,
            expect_error=False
        )

        self.assertEqual(1, len(wfs_spec.get_workflows()))
        self.assertEqual('test', wfs_spec.get_workflows()[0].get_name())
        self.assertEqual('direct', wfs_spec.get_workflows()[0].get_type())

    def test_direct_workflow_invalid_task(self):
        overlay = {
            'test': {
                'type': 'direct',
                'tasks': {}
            }
        }

        requires = {'requires': ['echo', 'get']}

        utils.merge_dicts(overlay['test']['tasks'], {'email': requires})

        self._parse_dsl_spec(
            add_tasks=True,
            changes=overlay,
            expect_error=True
        )

    def test_direct_workflow_no_start_tasks(self):
        overlay = {
            'test': {
                'type': 'direct',
                'tasks': {
                    'task1': {'on-complete': 'task2'},
                    'task2': {'on-complete': 'task1'}
                }
            }
        }

        self._parse_dsl_spec(
            add_tasks=False,
            changes=overlay,
            expect_error=True
        )

    def test_direct_workflow_invalid_join(self):
        tests = [
            ({'task3': {'join': 2}}, False),
            ({'task3': {'join': 5}}, True),
            ({'task3': {'join': 1}}, False),
            ({'task3': {'join': 'one'}}, False),
            ({'task3': {'join': 'all'}}, False),
            ({'task4': {'join': 'all'}}, True),
            ({'task4': {'join': 1}}, True),
            ({'task4': {'join': 'one'}}, True)
        ]

        for test in tests:
            overlay = {
                'test': {
                    'type': 'direct',
                    'tasks': {
                        'task1': {'on-complete': 'task3'},
                        'task2': {'on-complete': 'task3'}
                    }
                }
            }

            utils.merge_dicts(overlay['test']['tasks'], test[0])

            self._parse_dsl_spec(
                add_tasks=False,
                changes=overlay,
                expect_error=test[1]
            )

    def test_reverse_workflow(self):
        overlay = {'test': {'type': 'reverse', 'tasks': {}}}
        require = {'requires': ['echo', 'get']}

        utils.merge_dicts(overlay['test']['tasks'], {'email': require})

        wfs_spec = self._parse_dsl_spec(
            add_tasks=True,
            changes=overlay,
            expect_error=False
        )

        self.assertEqual(1, len(wfs_spec.get_workflows()))
        self.assertEqual('test', wfs_spec.get_workflows()[0].get_name())
        self.assertEqual('reverse', wfs_spec.get_workflows()[0].get_type())

    def test_reverse_workflow_invalid_task(self):
        overlay = {'test': {'type': 'reverse', 'tasks': {}}}
        join = {'join': 'all'}
        on_success = {'on-success': ['email']}

        utils.merge_dicts(overlay['test']['tasks'], {'get': on_success})
        utils.merge_dicts(overlay['test']['tasks'], {'echo': on_success})
        utils.merge_dicts(overlay['test']['tasks'], {'email': join})

        self._parse_dsl_spec(
            add_tasks=True,
            changes=overlay,
            expect_error=True
        )

    def test_version_required(self):
        dsl_dict = copy.deepcopy(self._dsl_blank)
        dsl_dict.pop('version', None)

        exception = self.assertRaises(
            exc.DSLParsingException,
            self._spec_parser,
            yaml.safe_dump(dsl_dict)
        )

        self.assertIn("'version' is a required property", str(exception))

    def test_version(self):
        tests = [
            ({'version': None}, True),
            ({'version': ''}, True),
            ({'version': '2.0'}, False),
            ({'version': 2.0}, False),
            ({'version': 2}, False)
        ]

        for version, expect_error in tests:
            self._parse_dsl_spec(
                add_tasks=True,
                changes=version,
                expect_error=expect_error
            )

    def test_inputs(self):
        tests = [
            ({'input': ['var1', 'var2']}, False),
            ({'input': ['var1', 'var1']}, True),
            ({'input': [12345]}, True),
            ({'input': [None]}, True),
            ({'input': ['']}, True),
            ({'input': None}, True),
            ({'input': []}, True),
            ({'input': ['var1', {'var2': 2}]}, False),
            ({'input': [{'var1': 1}, {'var2': 2}]}, False),
            ({'input': [{'var1': None}]}, False),
            ({'input': [{'var1': 1}, {'var1': 1}]}, True),
            ({'input': [{'var1': 1, 'var2': 2}]}, True)
        ]

        for wf_input, expect_error in tests:
            overlay = {'test': wf_input}

            self._parse_dsl_spec(
                add_tasks=True,
                changes=overlay,
                expect_error=expect_error
            )

    def test_outputs(self):
        tests = [
            ({'output': {'k1': 'a', 'k2': 1, 'k3': True, 'k4': None}}, False),
            ({'output': {'k1': '<% $.v1 %>'}}, False),
            ({'output': {'k1': '<% 1 + 2 %>'}}, False),
            ({'output': {'k1': '<% * %>'}}, True),
            ({'output': []}, True),
            ({'output': 'whatever'}, True),
            ({'output': None}, True),
            ({'output': {}}, True)
        ]

        for wf_output, expect_error in tests:
            overlay = {'test': wf_output}

            self._parse_dsl_spec(
                add_tasks=True,
                changes=overlay,
                expect_error=expect_error
            )

    def test_vars(self):
        tests = [
            ({'vars': {'v1': 'a', 'v2': 1, 'v3': True, 'v4': None}}, False),
            ({'vars': {'v1': '<% $.input_var1 %>'}}, False),
            ({'vars': {'v1': '<% 1 + 2 %>'}}, False),
            ({'vars': {'v1': '<% * %>'}}, True),
            ({'vars': {'v1': '{{ _.input_var1 }}'}}, False),
            ({'vars': {'v1': '{{ 1 + 2 }}'}}, False),
            ({'vars': {'v1': '{{ * }}'}}, True),
            ({'vars': []}, True),
            ({'vars': 'whatever'}, True),
            ({'vars': None}, True),
            ({'vars': {}}, True)
        ]

        for wf_vars, expect_error in tests:
            overlay = {'test': wf_vars}

            self._parse_dsl_spec(
                add_tasks=True,
                changes=overlay,
                expect_error=expect_error
            )

    def test_tasks_required(self):
        exception = self._parse_dsl_spec(
            add_tasks=False,
            expect_error=True
        )

        self.assertIn("'tasks' is a required property", str(exception))

    def test_tasks(self):
        tests = [
            ({'tasks': {}}, True),
            ({'tasks': None}, True),
            ({'tasks': self._dsl_tasks}, False)
        ]

        for wf_tasks, expect_error in tests:
            overlay = {'test': wf_tasks}

            self._parse_dsl_spec(
                add_tasks=False,
                changes=overlay,
                expect_error=expect_error
            )

    def test_task_defaults(self):
        tests = [
            ({'on-success': ['email']}, False),
            ({'on-success': [{'email': '<% 1 %>'}]}, False),
            ({'on-success': [{'email': '<% 1 %>'}, 'echo']}, False),
            ({'on-success': [{'email': '<% $.v1 in $.v2 %>'}]}, False),
            ({'on-success': [{'email': '<% * %>'}]}, True),
            ({'on-success': [{'email': '{{ 1 }}'}]}, False),
            ({'on-success': [{'email': '{{ 1 }}'}, 'echo']}, False),
            ({'on-success': [{'email': '{{ _.v1 in _.v2 }}'}]}, False),
            ({'on-success': [{'email': '{{ * }}'}]}, True),
            ({'on-success': 'email'}, False),
            ({'on-success': None}, True),
            ({'on-success': ['']}, True),
            ({'on-success': []}, True),
            ({'on-success': ['email', 'email']}, True),
            ({'on-success': ['email', 12345]}, True),
            ({'on-error': ['email']}, False),
            ({'on-error': [{'email': '<% 1 %>'}]}, False),
            ({'on-error': [{'email': '<% 1 %>'}, 'echo']}, False),
            ({'on-error': [{'email': '<% $.v1 in $.v2 %>'}]}, False),
            ({'on-error': [{'email': '<% * %>'}]}, True),
            ({'on-error': [{'email': '{{ 1 }}'}]}, False),
            ({'on-error': [{'email': '{{ 1 }}'}, 'echo']}, False),
            ({'on-error': [{'email': '{{ _.v1 in _.v2 }}'}]}, False),
            ({'on-error': [{'email': '{{ * }}'}]}, True),
            ({'on-error': 'email'}, False),
            ({'on-error': None}, True),
            ({'on-error': ['']}, True),
            ({'on-error': []}, True),
            ({'on-error': ['email', 'email']}, True),
            ({'on-error': ['email', 12345]}, True),
            ({'on-complete': ['email']}, False),
            ({'on-complete': [{'email': '<% 1 %>'}]}, False),
            ({'on-complete': [{'email': '<% 1 %>'}, 'echo']}, False),
            ({'on-complete': [{'email': '<% $.v1 in $.v2 %>'}]}, False),
            ({'on-complete': [{'email': '<% * %>'}]}, True),
            ({'on-complete': [{'email': '{{ 1 }}'}]}, False),
            ({'on-complete': [{'email': '{{ 1 }}'}, 'echo']}, False),
            ({'on-complete': [{'email': '{{ _.v1 in _.v2 }}'}]}, False),
            ({'on-complete': [{'email': '{{ * }}'}]}, True),
            ({'on-complete': 'email'}, False),
            ({'on-complete': None}, True),
            ({'on-complete': ['']}, True),
            ({'on-complete': []}, True),
            ({'on-complete': ['email', 'email']}, True),
            ({'on-complete': ['email', 12345]}, True),
            ({'requires': ''}, True),
            ({'requires': []}, True),
            ({'requires': ['']}, True),
            ({'requires': None}, True),
            ({'requires': 12345}, True),
            ({'requires': ['echo']}, False),
            ({'requires': ['echo', 'get']}, False),
            ({'requires': 'echo'}, False),
            ({'retry': {'count': 3, 'delay': 1}}, False),
            ({'retry': {'count': '<% 3 %>', 'delay': 1}}, False),
            ({'retry': {'count': '<% * %>', 'delay': 1}}, True),
            ({'retry': {'count': 3, 'delay': '<% 1 %>'}}, False),
            ({'retry': {'count': 3, 'delay': '<% * %>'}}, True),
            ({'retry': {'count': '{{ 3 }}', 'delay': 1}}, False),
            ({'retry': {'count': '{{ * }}', 'delay': 1}}, True),
            ({'retry': {'count': 3, 'delay': '{{ 1 }}'}}, False),
            ({'retry': {'count': 3, 'delay': '{{ * }}'}}, True),
            ({'retry': {'count': -3, 'delay': 1}}, True),
            ({'retry': {'count': 3, 'delay': -1}}, True),
            ({'retry': {'count': '3', 'delay': 1}}, True),
            ({'retry': {'count': 3, 'delay': '1'}}, True),
            ({'retry': None}, True),
            ({'wait-before': 1}, False),
            ({'wait-before': '<% 1 %>'}, False),
            ({'wait-before': '<% * %>'}, True),
            ({'wait-before': '{{ 1 }}'}, False),
            ({'wait-before': '{{ * }}'}, True),
            ({'wait-before': -1}, True),
            ({'wait-before': 1.0}, True),
            ({'wait-before': '1'}, True),
            ({'wait-after': 1}, False),
            ({'wait-after': '<% 1 %>'}, False),
            ({'wait-after': '<% * %>'}, True),
            ({'wait-after': '{{ 1 }}'}, False),
            ({'wait-after': '{{ * }}'}, True),
            ({'wait-after': -1}, True),
            ({'wait-after': 1.0}, True),
            ({'wait-after': '1'}, True),
            ({'timeout': 300}, False),
            ({'timeout': '<% 300 %>'}, False),
            ({'timeout': '<% * %>'}, True),
            ({'timeout': '{{ 300 }}'}, False),
            ({'timeout': '{{ * }}'}, True),
            ({'timeout': -300}, True),
            ({'timeout': 300.0}, True),
            ({'timeout': '300'}, True),
            ({'pause-before': False}, False),
            ({'pause-before': '<% False %>'}, False),
            ({'pause-before': '<% * %>'}, True),
            ({'pause-before': '{{ False }}'}, False),
            ({'pause-before': '{{ * }}'}, True),
            ({'pause-before': 'False'}, True),
            ({'concurrency': 10}, False),
            ({'concurrency': '<% 10 %>'}, False),
            ({'concurrency': '<% * %>'}, True),
            ({'concurrency': '{{ 10 }}'}, False),
            ({'concurrency': '{{ * }}'}, True),
            ({'concurrency': -10}, True),
            ({'concurrency': 10.0}, True),
            ({'concurrency': '10'}, True)
        ]

        for default, expect_error in tests:
            overlay = {'test': {'task-defaults': {}}}

            utils.merge_dicts(overlay['test']['task-defaults'], default)

            self._parse_dsl_spec(
                add_tasks=True,
                changes=overlay,
                expect_error=expect_error
            )

    def test_invalid_item(self):
        overlay = {'name': 'invalid'}

        exception = self._parse_dsl_spec(changes=overlay, expect_error=True)

        self.assertIn("Invalid DSL", str(exception))

    def test_invalid_name(self):
        invalid_wf = {
            'version': '2.0',
            'b98180ba-48a0-4e26-ab2e-50dc224f6fd1': {
                'type': 'direct',
                'tasks': {'t1': {'action': 'std.noop'}}
            }
        }

        dsl_yaml = yaml.safe_dump(invalid_wf, default_flow_style=False)

        exception = self.assertRaises(
            exc.InvalidModelException,
            self._spec_parser,
            dsl_yaml
        )

        self.assertIn(
            "Workflow name cannot be in the format of UUID",
            str(exception)
        )

    def test_tags(self):
        tests = [
            ({'tags': ''}, True),
            ({'tags': []}, True),
            ({'tags': ['']}, True),
            ({'tags': ['tag']}, False),
            ({'tags': ['tag', 'tag']}, True),
            ({'tags': None}, True)
        ]

        for wf_tags, expect_error in tests:
            overlay = {'test': wf_tags}

            self._parse_dsl_spec(
                add_tasks=True,
                changes=overlay,
                expect_error=expect_error
            )


# ---- File: mistral/tests/unit/lang/v2/base.py ----

# Copyright 2015 - StackStorm, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy

import yaml

from mistral import exceptions as exc
from mistral.lang import parser as spec_parser
from mistral.tests.unit import base
from mistral import utils


class WorkflowSpecValidationTestCase(base.BaseTest):

    def __init__(self, *args, **kwargs):
        super(WorkflowSpecValidationTestCase, self).__init__(*args, **kwargs)

        # The relative resource path is ./mistral/tests/resources/workbook/v2.
        self._resource_path = 'workbook/v2'

        self._spec_parser = spec_parser.get_workflow_list_spec_from_yaml

        self._dsl_blank = {
            'version': '2.0',
            'test': {
                'type': 'direct'
            }
        }

        self._dsl_tasks = {
            'get': {
                'action': 'std.http',
                'input': {
                    'url': 'https://www.openstack.org'
                }
            },
            'echo': {
                'action': 'std.echo',
                'input': {
                    'output': 'This is a test.'
                }
            },
            'email': {
                'action': 'std.email',
                'input': {
                    'from_addr': 'mistral@example.com',
                    'to_addrs': ['admin@example.com'],
                    'subject': 'Test',
                    'body': 'This is a test.',
                    'smtp_server': 'localhost',
                    'smtp_password': 'password'
                }
            }
        }

    def _parse_dsl_spec(self, dsl_file=None, add_tasks=False,
                        changes=None, expect_error=False):
        if dsl_file and add_tasks:
            raise Exception('The add_tasks option is not a valid '
                            'combination with the dsl_file option.')

        if dsl_file:
            dsl_yaml = base.get_resource(self._resource_path + '/' + dsl_file)

            if changes:
                dsl_dict = yaml.safe_load(dsl_yaml)
                utils.merge_dicts(dsl_dict, changes)
                dsl_yaml = yaml.safe_dump(dsl_dict, default_flow_style=False)
        else:
            dsl_dict = copy.deepcopy(self._dsl_blank)

            if add_tasks:
                dsl_dict['test']['tasks'] = copy.deepcopy(self._dsl_tasks)

            if changes:
                utils.merge_dicts(dsl_dict, changes)

            dsl_yaml = yaml.safe_dump(dsl_dict, default_flow_style=False)

        if not expect_error:
            return self._spec_parser(dsl_yaml)
        else:
            return self.assertRaises(
                exc.DSLParsingException,
                self._spec_parser,
                dsl_yaml
            )


class WorkbookSpecValidationTestCase(WorkflowSpecValidationTestCase):

    def __init__(self, *args, **kwargs):
        super(WorkbookSpecValidationTestCase, self).__init__(*args, **kwargs)

        self._spec_parser = spec_parser.get_workbook_spec_from_yaml

        self._dsl_blank = {
            'version': '2.0',
            'name': 'test_wb'
        }

    def _parse_dsl_spec(self, dsl_file=None, add_tasks=False,
                        changes=None, expect_error=False):
        return super(WorkbookSpecValidationTestCase, self)._parse_dsl_spec(
            dsl_file=dsl_file, add_tasks=False, changes=changes,
            expect_error=expect_error)


# ---- File: mistral/tests/unit/lang/v2/__init__.py (empty) ----


# ---- File: mistral/tests/unit/lang/v2/test_workbook.py ----

# Copyright 2013 - Mirantis, Inc.
# Copyright 2015 - StackStorm, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import copy
import re

import yaml

from mistral import exceptions as exc
from mistral.lang.v2 import workbook
from mistral.tests.unit.lang.v2 import base


class WorkbookSpecValidation(base.WorkbookSpecValidationTestCase):

    def test_build_valid_workbook_spec(self):
        wb_spec = self._parse_dsl_spec(dsl_file='my_workbook.yaml')

        # Workbook.
        act_specs = wb_spec.get_actions()
        wf_specs = wb_spec.get_workflows()

        self.assertEqual('2.0', wb_spec.get_version())
        self.assertEqual('my_workbook', wb_spec.get_name())
        self.assertEqual('This is a test workbook', wb_spec.get_description())
        self.assertListEqual(['test', 'v2'], wb_spec.get_tags())
        self.assertIsNotNone(act_specs)
        self.assertIsNotNone(wf_specs)

        # Actions.
        action_spec = act_specs.get('action1')

        self.assertIsNotNone(action_spec)
        self.assertEqual('2.0', action_spec.get_version())
        self.assertEqual('action1', action_spec.get_name())
        self.assertEqual(
            'This is a test ad-hoc action', action_spec.get_description()
        )
        self.assertListEqual(['test', 'v2'], action_spec.get_tags())
        self.assertEqual('std.echo', action_spec.get_base())
        self.assertDictEqual(
            {'output': 'Hello <% $.name %>!'}, action_spec.get_base_input()
        )
        self.assertDictEqual({}, action_spec.get_input())
        self.assertEqual('<% $ %>', action_spec.get_output())

        # Workflows.
        self.assertEqual(2, len(wf_specs))

        wf1_spec = wf_specs.get('wf1')

        self.assertEqual('2.0', wf1_spec.get_version())
        self.assertEqual('wf1', wf1_spec.get_name())
        self.assertEqual(
            'This is a test workflow', wf1_spec.get_description()
        )
        self.assertListEqual(['test', 'v2'], wf1_spec.get_tags())
        self.assertEqual('reverse', wf1_spec.get_type())
        self.assertEqual(2, len(wf1_spec.get_tasks()))

        # Tasks.
        task1_spec = wf1_spec.get_tasks().get('task1')

        self.assertIsNotNone(task1_spec)
        self.assertEqual('2.0', task1_spec.get_version())
        self.assertEqual('task1', task1_spec.get_name())
        self.assertEqual('This is a test task', task1_spec.get_description())
        self.assertEqual('action1', task1_spec.get_action_name())
        self.assertEqual({'name': '<% $.name %>'}, task1_spec.get_input())

        policies = task1_spec.get_policies()

        self.assertEqual(2, policies.get_wait_before())
        self.assertEqual(5, policies.get_wait_after())
        self.assertEqual(3, policies.get_concurrency())

        retry_spec = policies.get_retry()

        self.assertEqual(10, retry_spec.get_count())
        self.assertEqual(30, retry_spec.get_delay())
        self.assertEqual('<% $.my_val = 10 %>', retry_spec.get_break_on())

        task2_spec = wf1_spec.get_tasks().get('task2')

        self.assertIsNotNone(task2_spec)
        self.assertEqual('2.0', task2_spec.get_version())
        self.assertEqual('task2', task2_spec.get_name())
        self.assertEqual('std.echo', task2_spec.get_action_name())
        self.assertIsNone(task2_spec.get_workflow_name())
        self.assertEqual(
            {'output': 'Thanks <% $.name %>!'},
            task2_spec.get_input()
        )

        wf2_spec = wf_specs.get('wf2')

        self.assertEqual('2.0', wf2_spec.get_version())
        self.assertEqual('wf2', wf2_spec.get_name())
        self.assertListEqual(['test', 'v2'], wf2_spec.get_tags())
        self.assertEqual('direct', wf2_spec.get_type())
        self.assertEqual(11, len(wf2_spec.get_tasks()))

        task_defaults_spec = wf2_spec.get_task_defaults()

        self.assertListEqual(
            [('fail', '<% $.my_val = 0 %>', {})],
            task_defaults_spec.get_on_error().get_next()
        )
        self.assertListEqual(
            [('pause', '', {})],
            task_defaults_spec.get_on_success().get_next()
        )
        self.assertListEqual(
            [('succeed', '', {})],
            task_defaults_spec.get_on_complete().get_next()
        )

        task3_spec = wf2_spec.get_tasks().get('task3')

        self.assertIsNotNone(task3_spec)
        self.assertEqual('2.0', task3_spec.get_version())
        self.assertEqual('task3', task3_spec.get_name())
        self.assertIsNone(task3_spec.get_action_name())
        self.assertEqual('wf1', task3_spec.get_workflow_name())
        self.assertEqual(
            {
                'name': 'John Doe',
                'age': 32,
                'param1': None,
                'param2': False
            },
            task3_spec.get_input()
        )
        self.assertListEqual(
            [('task4', '<% $.my_val = 1 %>', {})],
            task3_spec.get_on_error().get_next()
        )
        self.assertListEqual(
            [('task5', '<% $.my_val = 2 %>', {})],
            task3_spec.get_on_success().get_next()
        )
        self.assertListEqual(
            [('task6', '<% $.my_val = 3 %>', {})],
            task3_spec.get_on_complete().get_next()
        )

        task7_spec = wf2_spec.get_tasks().get('task7')

        self.assertEqual(
            {
                'is_true': True,
                'object_list': [1, None, 'str'],
                'is_string': '50'
            },
            task7_spec.get_input()
        )
        self.assertEqual(
            {'vm_info': '<% $.vms %>'},
            task7_spec.get_with_items()
        )

        task8_spec = wf2_spec.get_tasks().get('task8')

        self.assertEqual(
            {
                'itemX': '<% $.arrayI %>',
                "itemY": '<% $.arrayJ %>'
            },
            task8_spec.get_with_items()
        )
        self.assertEqual(
            {
                'expr_list': ['<% $.v %>', '<% $.k %>'],
                'expr': '<% $.value %>',
            },
            task8_spec.get_input()
        )
        self.assertEqual('nova', task8_spec.get_target())

        task9_spec = wf2_spec.get_tasks().get('task9')

        self.assertEqual('all', task9_spec.get_join())

        task10_spec = wf2_spec.get_tasks().get('task10')

        self.assertEqual(2, task10_spec.get_join())

        task11_spec = wf2_spec.get_tasks().get('task11')

        self.assertEqual('one', task11_spec.get_join())

        task12_spec = wf2_spec.get_tasks().get('task12')

        self.assertDictEqual(
            {
                'url': 'http://site.com?q=<% $.query %>',
                'params': ''
            },
            task12_spec.get_input()
        )

        task13_spec = wf2_spec.get_tasks().get('task13')

        self.assertEqual('std.noop', task13_spec.get_action_name())
        self.assertEqual('No-op task', task13_spec.get_description())

    def test_adhoc_action_with_base_in_one_string(self):
        wb_spec = self._parse_dsl_spec(dsl_file='my_workbook.yaml')

        act_specs = wb_spec.get_actions()
        action_spec = act_specs.get("action2")

        self.assertEqual('std.echo', action_spec.get_base())
        self.assertEqual(
            {'output': 'Echo output'},
            action_spec.get_base_input()
        )

    def test_spec_to_dict(self):
        wb_spec = self._parse_dsl_spec(dsl_file='my_workbook.yaml')

        d = wb_spec.to_dict()

        self.assertEqual('2.0', d['version'])
        self.assertEqual('2.0', d['workflows']['version'])
        self.assertEqual('2.0', d['workflows']['wf1']['version'])

    def test_version_required(self):
        dsl_dict = copy.deepcopy(self._dsl_blank)
        dsl_dict.pop('version', None)

        # TODO(m4dcoder): Check required property error when v1 is deprecated.
        # The version property is not required for v1 workbook whereas it is
        # a required property in v2. For backward compatibility, if a version
        # is not provided, the workbook spec parser defaults to v1 and the
        # required property exception is not triggered. However, a different
        # spec validation error returns due to drastically different schema
        # between workbook versions.
        self.assertRaises(
            exc.DSLParsingException,
            self._spec_parser,
            yaml.safe_dump(dsl_dict)
        )

    def test_version(self):
        tests = [
            ({'version': None}, True),
            ({'version': ''}, True),
            ({'version': '1.0'}, True),
            ({'version': '2.0'}, False),
            ({'version': 2.0}, False),
            ({'version': 2}, False)
        ]

        for version, expect_error in tests:
            self._parse_dsl_spec(changes=version, expect_error=expect_error)

    def test_name_required(self):
        dsl_dict = copy.deepcopy(self._dsl_blank)
        dsl_dict.pop('name', None)

        exception = self.assertRaises(
            exc.DSLParsingException,
            self._spec_parser,
            yaml.safe_dump(dsl_dict)
        )

        self.assertIn("'name' is a required property", str(exception))

    def test_name(self):
        tests = [
            ({'name': ''}, True),
            ({'name': None}, True),
            ({'name': 12345}, True),
            ({'name': 'foobar'}, False)
        ]

        for name, expect_error in tests:
            self._parse_dsl_spec(changes=name, expect_error=expect_error)

    def test_description(self):
        tests = [
            ({'description': ''}, True),
            ({'description': None}, True),
            ({'description': 12345}, True),
            ({'description': 'This is a test workflow.'}, False)
        ]

        for description, expect_error in tests:
            self._parse_dsl_spec(
                changes=description,
                expect_error=expect_error
            )

    def test_tags(self):
        tests = [
            ({'tags': ''}, True),
            ({'tags': ['']}, True),
            ({'tags': None}, True),
            ({'tags': 12345}, True),
            ({'tags': ['foo', 'bar']}, False),
            ({'tags': ['foobar', 'foobar']}, True)
        ]

        for tags, expect_error in tests:
            self._parse_dsl_spec(changes=tags, expect_error=expect_error)

    def test_actions(self):
        actions = {
            'version': '2.0',
            'noop': {
                'base': 'std.noop'
            },
            'echo': {
                'base': 'std.echo'
            }
        }

        tests = [
            ({'actions': []}, True),
            ({'actions': {}}, True),
            ({'actions': None}, True),
            ({'actions': {'version': None}}, True),
            ({'actions': {'version': ''}}, True),
            ({'actions': {'version': '1.0'}}, True),
            ({'actions': {'version': '2.0'}}, False),
            ({'actions': {'version': 2.0}}, False),
            ({'actions': {'version': 2}}, False),
            ({'actions': {'noop': actions['noop']}}, False),
            ({'actions': {'version': '2.0', 'noop': 'std.noop'}}, True),
            ({'actions': actions}, False)
        ]

        for adhoc_actions, expect_error in tests:
            self._parse_dsl_spec(
                changes=adhoc_actions,
                expect_error=expect_error
            )

    def test_workflows(self):
        workflows = {
            'version': '2.0',
            'wf1': {
                'tasks': {
                    'noop-task': {
                        'action': 'std.noop'
                    }
                }
            },
            'wf2': {
                'tasks': {
                    'echo': {
                        'action': 'std.echo output="This is a test."'
                    }
                }
            }
        }

        tests = [
            # ({'workflows': []}, True),
            # ({'workflows': {}}, True),
            # ({'workflows': None}, True),
            # ({'workflows': {'version': None}}, True),
            # ({'workflows': {'version': ''}}, True),
            # ({'workflows': {'version': '1.0'}}, True),
            # ({'workflows': {'version': '2.0'}}, False),
            # ({'workflows': {'version': 2.0}}, False),
            # ({'workflows': {'version': 2}}, False),
            # ({'workflows': {'wf1': workflows['wf1']}}, False),
            ({'workflows': {'version': '2.0', 'wf1': 'wf1'}}, True),
            ({'workflows': workflows}, False)
        ]

        for workflows, expect_error in tests:
            self._parse_dsl_spec(changes=workflows, expect_error=expect_error)

    def test_workflow_name_validation(self):
        wb_spec = self._parse_dsl_spec(dsl_file='workbook_schema_test.yaml')

        d = wb_spec.to_dict()

        self.assertEqual('2.0', d['version'])
        self.assertEqual('2.0', d['workflows']['version'])

        workflow_names = ['workflowversion', 'versionworkflow',
                          'workflowversionworkflow', 'version_workflow']

        action_names = ['actionversion', 'versionaction',
                        'actionversionaction']

        for name in workflow_names:
            self.assertEqual('2.0', d['workflows'][name]['version'])
            self.assertEqual(name, d['workflows'][name]['name'])

        for name in action_names:
            self.assertEqual('2.0', d['actions'][name]['version'])
            self.assertEqual(name, d['actions'][name]['name'])

    def test_name_regex(self):
        # We want to match a string containing version at any point.
        valid_names = (
            "workflowversion",
            "versionworkflow",
            "workflowversionworkflow",
            "version_workflow",
            "version-workflow",
        )

        for valid in valid_names:
            result = re.match(workbook.NON_VERSION_WORD_REGEX, valid)

            self.assertIsNotNone(
                result,
                "Expected match for: {}".format(valid)
            )

        # ... except, we don't want to match a string that isn't just one word
        # or is exactly "version"
        invalid_names = ("version", "my workflow")

        for invalid in invalid_names:
            result = re.match(workbook.NON_VERSION_WORD_REGEX, invalid)

            self.assertIsNone(
                result,
                "Didn't expect match for: {}".format(invalid)
            )


# ---- File: mistral/tests/unit/lang/v2/test_actions.py ----

# Copyright 2015 - StackStorm, Inc.
# Copyright 2016 - Brocade Communications Systems, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy

from mistral.tests.unit.lang.v2 import base
from mistral import utils


class ActionSpecValidation(base.WorkbookSpecValidationTestCase):

    def test_base_required(self):
        actions = {'actions': {'a1': {}}}

        exception = self._parse_dsl_spec(changes=actions, expect_error=True)

        self.assertIn("'base' is a required property", str(exception))

    def test_base(self):
        tests = [
            ({'actions': {'a1': {'base': ''}}}, True),
            ({'actions': {'a1': {'base': None}}}, True),
            ({'actions': {'a1': {'base': 12345}}}, True),
            ({'actions': {'a1': {'base': 'std.noop'}}}, False),
            ({'actions': {'a1': {'base': 'std.echo output="foo"'}}}, False),
            ({'actions': {'a1': {'base': 'std.echo output="<% $.x %>"'}}},
             False),
            ({'actions': {'a1': {'base': 'std.echo output="<% * %>"'}}},
             True),
            ({'actions': {'a1': {'base': 'std.echo output="{{ _.x }}"'}}},
             False),
            ({'actions': {'a1': {'base': 'std.echo output="{{ * }}"'}}},
             True)
        ]

        for actions, expect_error in tests:
            self._parse_dsl_spec(changes=actions, expect_error=expect_error)

    def test_base_input(self):
        tests = [
            ({'base-input': {}}, True),
            ({'base-input': None}, True),
            ({'base-input': {'k1': 'v1', 'k2': '<% $.v2 %>'}}, False),
            ({'base-input': {'k1': 'v1', 'k2': '<% * %>'}}, True),
            ({'base-input': {'k1': 'v1', 'k2': '{{ _.v2 }}'}}, False),
            ({'base-input': {'k1': 'v1', 'k2': '{{ * }}'}}, True)
        ]

        actions = {
            'a1': {
                'base': 'foobar'
            }
        }

        for base_inputs, expect_error in tests:
            overlay = {'actions': copy.deepcopy(actions)}

            utils.merge_dicts(overlay['actions']['a1'], base_inputs)

            self._parse_dsl_spec(changes=overlay, expect_error=expect_error)

    def test_input(self):
        tests = [
            ({'input': ''}, True),
            ({'input': []}, True),
            ({'input': ['']}, True),
            ({'input': None}, True),
            ({'input': ['k1', 'k2']}, False),
            ({'input': ['k1', 12345]}, True),
            ({'input': ['k1', {'k2': 2}]}, False),
            ({'input': [{'k1': 1}, {'k2': 2}]}, False),
            ({'input': [{'k1': None}]}, False),
            ({'input': [{'k1': 1}, {'k1': 1}]}, True),
            ({'input': [{'k1': 1, 'k2': 2}]}, True)
        ]

        actions = {
            'a1': {
                'base': 'foobar'
            }
        }

        for inputs, expect_error in tests:
            overlay = {'actions': copy.deepcopy(actions)}

            utils.merge_dicts(overlay['actions']['a1'], inputs)

            self._parse_dsl_spec(changes=overlay, expect_error=expect_error)

    def test_output(self):
        tests = [
            ({'output': None}, False),
            ({'output': False}, False),
            ({'output': 12345}, False),
            ({'output': 0.12345}, False),
            ({'output': 'foobar'}, False),
            ({'output': '<% $.x %>'}, False),
            ({'output': '<% * %>'}, True),
            ({'output': '{{ _.x }}'}, False),
            ({'output': '{{ * }}'}, True),
            ({'output': ['v1']}, False),
            ({'output': {'k1': 'v1'}}, False)
        ]

        actions = {
            'a1': {
                'base': 'foobar'
            }
        }

        for outputs, expect_error in tests:
            overlay = {'actions': copy.deepcopy(actions)}

            utils.merge_dicts(overlay['actions']['a1'], outputs)

            self._parse_dsl_spec(changes=overlay, expect_error=expect_error)


# ---- File: mistral/tests/unit/lang/__init__.py (empty) ----


# ---- File: mistral/tests/unit/lang/test_spec_caching.py ----

# Copyright 2015 - StackStorm, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from mistral.db.v2 import api as db_api from mistral.lang import parser as spec_parser from mistral.services import workbooks as wb_service from mistral.services import workflows as wf_service from mistral.tests.unit import base from mistral.workflow import states class SpecificationCachingTest(base.DbTestCase): def test_workflow_spec_caching(self): wf_text = """ version: '2.0' wf: tasks: task1: action: std.echo output="Echo" """ wfs = wf_service.create_workflows(wf_text) self.assertEqual(0, spec_parser.get_wf_execution_spec_cache_size()) self.assertEqual(0, spec_parser.get_wf_definition_spec_cache_size()) wf_spec = spec_parser.get_workflow_spec_by_definition_id( wfs[0].id, wfs[0].updated_at ) self.assertIsNotNone(wf_spec) self.assertEqual(0, spec_parser.get_wf_execution_spec_cache_size()) self.assertEqual(1, spec_parser.get_wf_definition_spec_cache_size()) def test_workflow_spec_cache_update_via_workflow_service(self): wf_text = """ version: '2.0' wf: tasks: task1: action: std.echo output="Echo" """ wfs = wf_service.create_workflows(wf_text) self.assertEqual(0, spec_parser.get_wf_execution_spec_cache_size()) self.assertEqual(0, spec_parser.get_wf_definition_spec_cache_size()) wf_spec = spec_parser.get_workflow_spec_by_definition_id( wfs[0].id, wfs[0].updated_at ) self.assertEqual(1, len(wf_spec.get_tasks())) self.assertEqual(0, spec_parser.get_wf_execution_spec_cache_size()) self.assertEqual(1, spec_parser.get_wf_definition_spec_cache_size()) # Now update workflow definition and check that cache is updated too. 
wf_text = """ version: '2.0' wf: tasks: task1: action: std.echo output="1" task2: action: std.echo output="2" """ wfs = wf_service.update_workflows(wf_text) self.assertEqual(1, spec_parser.get_wf_definition_spec_cache_size()) wf_spec = spec_parser.get_workflow_spec_by_definition_id( wfs[0].id, wfs[0].updated_at ) self.assertEqual(2, len(wf_spec.get_tasks())) self.assertEqual(2, spec_parser.get_wf_definition_spec_cache_size()) self.assertEqual(0, spec_parser.get_wf_execution_spec_cache_size()) def test_workflow_spec_cache_update_via_workbook_service(self): wb_text = """ version: '2.0' name: wb workflows: wf: tasks: task1: action: std.echo output="Echo" """ wb_service.create_workbook_v2(wb_text) self.assertEqual(0, spec_parser.get_wf_execution_spec_cache_size()) self.assertEqual(0, spec_parser.get_wf_definition_spec_cache_size()) wf = db_api.get_workflow_definition('wb.wf') wf_spec = spec_parser.get_workflow_spec_by_definition_id( wf.id, wf.updated_at ) self.assertEqual(1, len(wf_spec.get_tasks())) self.assertEqual(0, spec_parser.get_wf_execution_spec_cache_size()) self.assertEqual(1, spec_parser.get_wf_definition_spec_cache_size()) # Now update workflow definition and check that cache is updated too. 
        wb_text = """
        version: '2.0'

        name: wb

        workflows:
          wf:
            tasks:
              task1:
                action: std.echo output="1"

              task2:
                action: std.echo output="2"
        """

        wb_service.update_workbook_v2(wb_text)

        self.assertEqual(0, spec_parser.get_wf_execution_spec_cache_size())
        self.assertEqual(1, spec_parser.get_wf_definition_spec_cache_size())

        wf = db_api.get_workflow_definition(wf.id)

        wf_spec = spec_parser.get_workflow_spec_by_definition_id(
            wf.id,
            wf.updated_at
        )

        self.assertEqual(2, len(wf_spec.get_tasks()))
        self.assertEqual(0, spec_parser.get_wf_execution_spec_cache_size())
        self.assertEqual(2, spec_parser.get_wf_definition_spec_cache_size())

    def test_cache_workflow_spec_by_execution_id(self):
        wf_text = """
        version: '2.0'

        wf:
          tasks:
            task1:
              action: std.echo output="Echo"
        """

        wfs = wf_service.create_workflows(wf_text)

        self.assertEqual(0, spec_parser.get_wf_execution_spec_cache_size())
        self.assertEqual(0, spec_parser.get_wf_definition_spec_cache_size())

        wf_def = wfs[0]

        wf_spec = spec_parser.get_workflow_spec_by_definition_id(
            wf_def.id,
            wf_def.updated_at
        )

        self.assertEqual(1, len(wf_spec.get_tasks()))
        self.assertEqual(0, spec_parser.get_wf_execution_spec_cache_size())
        self.assertEqual(1, spec_parser.get_wf_definition_spec_cache_size())

        wf_ex = db_api.create_workflow_execution({
            'id': '1-2-3-4',
            'name': 'wf',
            'workflow_id': wf_def.id,
            'spec': wf_spec.to_dict(),
            'state': states.RUNNING
        })

        # Check that we can get a valid spec by execution id.
        wf_spec_by_exec_id = spec_parser.get_workflow_spec_by_execution_id(
            wf_ex.id
        )

        self.assertEqual(1, len(wf_spec_by_exec_id.get_tasks()))

        # Now update workflow definition and check that cache is updated too.
        wf_text = """
        version: '2.0'

        wf:
          tasks:
            task1:
              action: std.echo output="1"

            task2:
              action: std.echo output="2"
        """

        wfs = wf_service.update_workflows(wf_text)

        self.assertEqual(1, spec_parser.get_wf_definition_spec_cache_size())

        wf_spec = spec_parser.get_workflow_spec_by_definition_id(
            wfs[0].id,
            wfs[0].updated_at
        )

        self.assertEqual(2, len(wf_spec.get_tasks()))
        self.assertEqual(2, spec_parser.get_wf_definition_spec_cache_size())
        self.assertEqual(1, spec_parser.get_wf_execution_spec_cache_size())

        # Now finally update execution cache and check that we can
        # get a valid spec by execution id.
        spec_parser.cache_workflow_spec_by_execution_id(wf_ex.id, wf_spec)

        wf_spec_by_exec_id = spec_parser.get_workflow_spec_by_execution_id(
            wf_ex.id
        )

        self.assertEqual(2, len(wf_spec_by_exec_id.get_tasks()))

mistral-6.0.0/mistral/tests/unit/test_command_dispatcher.py

# Copyright 2013 - Mirantis, Inc.
# Copyright 2015 - StackStorm, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from mistral.engine import dispatcher
from mistral.tests.unit import base
from mistral.workflow import commands


def _print_commands(cmds):
    print("commands:")

    for cmd in cmds:
        if isinstance(cmd, commands.RunTask):
            print("%s, %s, %s" % (type(cmd), cmd.is_waiting(), cmd.unique_key))
        else:
            print("%s" % type(cmd))


class CommandDispatcherTest(base.BaseTest):
    def test_rearrange_commands(self):
        no_wait = commands.RunTask(None, None, None, None)

        fail = commands.FailWorkflow(None, None, None, None)
        succeed = commands.SucceedWorkflow(None, None, None, None)

        wait1 = commands.RunTask(None, None, None, None)
        wait1.wait = True
        wait1.unique_key = 'wait1'

        wait2 = commands.RunTask(None, None, None, None)
        wait2.wait = True
        wait2.unique_key = 'wait2'

        wait3 = commands.RunTask(None, None, None, None)
        wait3.wait = True
        wait3.unique_key = 'wait3'

        # 'set state' command is the first, others must be ignored.
        initial = [fail, no_wait, wait1, wait3, wait2]
        expected = [fail]

        cmds = dispatcher._rearrange_commands(initial)

        self.assertEqual(expected, cmds)

        # 'set state' command is the last, tasks before it must be sorted.
        initial = [no_wait, wait2, wait1, wait3, succeed]
        expected = [no_wait, wait1, wait2, wait3, succeed]

        cmds = dispatcher._rearrange_commands(initial)

        self.assertEqual(expected, cmds)

        # 'set state' command is in the middle, tasks before it must be sorted
        # and the task after it must be ignored.
        initial = [wait3, wait2, no_wait, succeed, wait1]
        expected = [no_wait, wait2, wait3, succeed]

        cmds = dispatcher._rearrange_commands(initial)

        self.assertEqual(expected, cmds)

mistral-6.0.0/mistral/tests/unit/base.py

# Copyright 2013 - Mirantis, Inc.
# Copyright 2015 - StackStorm, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import datetime import json import pkg_resources as pkg import sys import time import mock from oslo_config import cfg from oslo_log import log as logging from oslotest import base import testtools.matchers as ttm from mistral import config from mistral import context as auth_context from mistral.db.sqlalchemy import base as db_sa_base from mistral.db.sqlalchemy import sqlite_lock from mistral.db.v2 import api as db_api from mistral.lang import parser as spec_parser from mistral.services import action_manager from mistral.services import security from mistral.tests.unit import config as test_config from mistral.utils import inspect_utils as i_utils from mistral import version from mistral.workflow import lookup_utils RESOURCES_PATH = 'tests/resources/' LOG = logging.getLogger(__name__) test_config.parse_args() def get_resource(resource_name): return open(pkg.resource_filename( version.version_info.package, RESOURCES_PATH + resource_name)).read() def get_context(default=True, admin=False): if default: return auth_context.MistralContext.from_dict({ 'user_name': 'test-user', 'user': '1-2-3-4', 'tenant': security.DEFAULT_PROJECT_ID, 'project_name': 'test-project', 'is_admin': admin }) else: return auth_context.MistralContext.from_dict({ 'user_name': 'test-user', 'user': '9-0-44-5', 'tenant': '99-88-33', 'project_name': 'test-another', 'is_admin': admin }) def register_action_class(name, cls, attributes=None, desc=None): action_manager.register_action_class( name, '%s.%s' % (cls.__module__, cls.__name__), attributes or {}, 
input_str=i_utils.get_arg_list_as_str(cls.__init__) ) class FakeHTTPResponse(object): def __init__(self, text, status_code, reason=None, headers=None, history=None, encoding='utf-8', url='', cookies=None, elapsed=None): self.text = text self.content = text self.status_code = status_code self.reason = reason self.headers = headers or {} self.history = history self.encoding = encoding self.url = url self.cookies = cookies or {} self.elapsed = elapsed or datetime.timedelta(milliseconds=123) def json(self, **kwargs): return json.loads(self.text, **kwargs) class BaseTest(base.BaseTestCase): def setUp(self): super(BaseTest, self).setUp() self.addCleanup(spec_parser.clear_caches) def register_action_class(self, name, cls, attributes=None, desc=None): # Added for convenience (to avoid unnecessary imports). register_action_class(name, cls, attributes, desc) def assertListEqual(self, l1, l2): if tuple(sys.version_info)[0:2] < (2, 7): # for python 2.6 compatibility self.assertEqual(l1, l2) else: super(BaseTest, self).assertListEqual(l1, l2) def assertDictEqual(self, cmp1, cmp2): if tuple(sys.version_info)[0:2] < (2, 7): # for python 2.6 compatibility self.assertThat(cmp1, ttm.Equals(cmp2)) else: super(BaseTest, self).assertDictEqual(cmp1, cmp2) def _assert_single_item(self, items, **props): return self._assert_multiple_items(items, 1, **props)[0] def _assert_multiple_items(self, items, count, **props): def _matches(item, **props): for prop_name, prop_val in props.items(): v = item[prop_name] if isinstance( item, dict) else getattr(item, prop_name) if v != prop_val: return False return True filtered_items = list( [item for item in items if _matches(item, **props)] ) found = len(filtered_items) if found != count: LOG.info("[failed test ctx] items=%s, expected_props=%s", (str( items), props)) self.fail("Wrong number of items found [props=%s, " "expected=%s, found=%s]" % (props, count, found)) return filtered_items def _assert_dict_contains_subset(self, expected, actual, 
msg=None): """Checks whether actual is a superset of expected. Note: This is almost the exact copy of the standard method assertDictContainsSubset() that appeared in Python 2.7, it was added to use it with Python 2.6. """ missing = [] mismatched = [] for key, value in expected.items(): if key not in actual: missing.append(key) elif value != actual[key]: mismatched.append('%s, expected: %s, actual: %s' % (key, value, actual[key])) if not (missing or mismatched): return standardMsg = '' if missing: standardMsg = 'Missing: %s' % ','.join(m for m in missing) if mismatched: if standardMsg: standardMsg += '; ' standardMsg += 'Mismatched values: %s' % ','.join(mismatched) self.fail(self._formatMessage(msg, standardMsg)) def _await(self, predicate, delay=1, timeout=60, fail_message="no detail"): """Awaits for predicate function to evaluate to True. If within a configured timeout predicate function hasn't evaluated to True then an exception is raised. :param predicate: Predication function. :param delay: Delay in seconds between predicate function calls. :param timeout: Maximum amount of time to wait for predication function to evaluate to True. :param fail_message: explains what was expected :return: """ end_time = time.time() + timeout while True: if predicate(): break if time.time() + delay > end_time: raise AssertionError( "Failed to wait for expected result: " + fail_message ) time.sleep(delay) def _sleep(self, seconds): time.sleep(seconds) def override_config(self, name, override, group=None): """Cleanly override CONF variables.""" cfg.CONF.set_override(name, override, group) self.addCleanup(cfg.CONF.clear_override, name, group) class DbTestCase(BaseTest): is_heavy_init_called = False @classmethod def __heavy_init(cls): """Method that runs heavy_init(). Make this method private to prevent extending this one. It runs heavy_init() only once. Note: setUpClass() can be used, but it magically is not invoked from child class in another module. 
""" if not cls.is_heavy_init_called: cls.heavy_init() cls.is_heavy_init_called = True @classmethod def heavy_init(cls): """Runs a long initialization. This method runs long initialization once by class and can be extended by child classes. """ # If using sqlite, change to memory. The default is file based. if cfg.CONF.database.connection.startswith('sqlite'): cfg.CONF.set_default('connection', 'sqlite://', group='database') # This option is normally registered in sync_db.py so we have to # register it here specifically for tests. cfg.CONF.register_opt(config.os_actions_mapping_path) cfg.CONF.set_default( 'openstack_actions_mapping_path', 'tests/resources/openstack/test_mapping.json' ) cfg.CONF.set_default('max_overflow', -1, group='database') cfg.CONF.set_default('max_pool_size', 1000, group='database') db_api.setup_db() action_manager.sync_db() def _clean_db(self): lookup_utils.clear_caches() contexts = [ get_context(default=False), get_context(default=True) ] for ctx in contexts: auth_context.set_ctx(ctx) with mock.patch('mistral.services.security.get_project_id', new=mock.MagicMock(return_value=ctx.project_id)): with db_api.transaction(): db_api.delete_event_triggers() db_api.delete_cron_triggers() db_api.delete_workflow_executions() db_api.delete_task_executions() db_api.delete_action_executions() db_api.delete_workbooks() db_api.delete_workflow_definitions() db_api.delete_environments() db_api.delete_resource_members() db_api.delete_delayed_calls() sqlite_lock.cleanup() if not cfg.CONF.database.connection.startswith('sqlite'): db_sa_base.get_engine().dispose() def setUp(self): super(DbTestCase, self).setUp() self.__heavy_init() self.ctx = get_context() auth_context.set_ctx(self.ctx) self.addCleanup(auth_context.set_ctx, None) self.addCleanup(self._clean_db) def is_db_session_open(self): return db_sa_base._get_thread_local_session() is not None mistral-6.0.0/mistral/tests/unit/test_expressions.py0000666000175100017510000001305513245513262023014 0ustar 
zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from mistral import exceptions as exc from mistral import expressions as expr from mistral.tests.unit import base DATA = { "server": { "id": "03ea824a-aa24-4105-9131-66c48ae54acf", "name": "cloud-fedora", "status": "ACTIVE" }, "status": "OK" } SERVERS = { "servers": [ {'name': 'centos'}, {'name': 'ubuntu'}, {'name': 'fedora'} ] } class ExpressionsTest(base.BaseTest): def test_evaluate_complex_expressions(self): data = { 'a': 1, 'b': 2, 'c': 3, 'd': True, 'e': False, 'f': 10.1, 'g': 10, 'h': [1, 2, 3, 4, 5], 'i': 'We are OpenStack!', 'j': 'World', 'k': 'Mistral', 'l': 'awesome', 'm': 'the way we roll' } test_cases = [ ('<% $.a + $.b * $.c %>', 7), ('<%($.a + $.b) * $.c %>', 9), ('<% $.d and $.e %>', False), ('<% $.f > $.g %>', True), ('<% $.h.len() >= 5 %>', True), ('<% $.h.len() >= $.b + $.c %>', True), ('<% 100 in $.h %>', False), ('<% $.a in $.h%>', True), ('<% ''OpenStack'' in $.i %>', True), ('Hello, <% $.j %>!', 'Hello, World!'), ('<% $.k %> is <% $.l %>!', 'Mistral is awesome!'), ('This is <% $.m %>.', 'This is the way we roll.'), ('<% 1 + 1 = 3 %>', False) ] for expression, expected in test_cases: actual = expr.evaluate_recursively(expression, data) self.assertEqual(expected, actual) def test_evaluate_recursively(self): task_spec_dict = { 'parameters': { 'p1': 'My string', 'p2': 
'<% $.param2 %>', 'p3': '' }, 'publish': { 'new_key11': 'new_key1' } } modified_task = expr.evaluate_recursively( task_spec_dict, {'param2': 'val32'} ) self.assertDictEqual( { 'parameters': { 'p1': 'My string', 'p2': 'val32', 'p3': '' }, 'publish': { 'new_key11': 'new_key1' } }, modified_task ) def test_evaluate_recursively_arbitrary_dict(self): context = { "auth_token": "123", "project_id": "mistral" } data = { "parameters": { "parameter1": { "name1": "<% $.auth_token %>", "name2": "val_name2" }, "param2": [ "var1", "var2", "/servers/<% $.project_id %>/bla" ] }, "token": "<% $.auth_token %>" } applied = expr.evaluate_recursively(data, context) self.assertDictEqual( { "parameters": { "parameter1": { "name1": "123", "name2": "val_name2" }, "param2": ["var1", "var2", "/servers/mistral/bla"] }, "token": "123" }, applied ) def test_evaluate_recursively_environment(self): environment = { 'host': 'vm1234.example.com', 'db': 'test', 'timeout': 600, 'verbose': True, '__actions': { 'std.sql': { 'conn': 'mysql://admin:secret@<% env().host %>' '/<% env().db %>' } } } context = { '__env': environment } defaults = context['__env']['__actions']['std.sql'] applied = expr.evaluate_recursively(defaults, context) expected = 'mysql://admin:secret@vm1234.example.com/test' self.assertEqual(expected, applied['conn']) def test_validate_jinja_with_yaql_context(self): self.assertRaises(exc.JinjaGrammarException, expr.validate, '{{ $ }}') def test_validate_mixing_jinja_and_yaql(self): self.assertRaises(exc.ExpressionGrammarException, expr.validate, '<% $.a %>{{ _.a }}') self.assertRaises(exc.ExpressionGrammarException, expr.validate, '{{ _.a }}<% $.a %>') def test_evaluate_mixing_jinja_and_yaql(self): actual = expr.evaluate('<% $.a %>{{ _.a }}', {'a': 'b'}) self.assertEqual('<% $.a %>b', actual) actual = expr.evaluate('{{ _.a }}<% $.a %>', {'a': 'b'}) self.assertEqual('b<% $.a %>', actual) mistral-6.0.0/mistral/tests/unit/__init__.py0000666000175100017510000000144613245513261021132 0ustar 
# Copyright 2016 NEC Corporation.  All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sys

import eventlet

eventlet.monkey_patch(
    os=True,
    select=True,
    socket=True,
    thread=False if '--use-debugger' in sys.argv else True,
    time=True
)

mistral-6.0.0/mistral/tests/unit/test_serialization.py

# Copyright 2017 - Nokia Networks.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from mistral import serialization
from mistral.tests.unit import base


class MyClass(serialization.MistralSerializable):
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def __eq__(self, other):
        if not isinstance(other, MyClass):
            return False

        return other.a == self.a and other.b == self.b


class MyClassSerializer(serialization.DictBasedSerializer):
    def serialize_to_dict(self, entity):
        return {'a': entity.a, 'b': entity.b}

    def deserialize_from_dict(self, entity_dict):
        return MyClass(entity_dict['a'], entity_dict['b'])


class SerializationTest(base.BaseTest):
    def setUp(self):
        super(SerializationTest, self).setUp()

        serialization.register_serializer(MyClass, MyClassSerializer())

        self.addCleanup(serialization.unregister_serializer, MyClass)

    def test_dict_based_serializer(self):
        obj = MyClass('a', 'b')

        serializer = MyClassSerializer()

        s = serializer.serialize(obj)

        self.assertEqual(obj, serializer.deserialize(s))

        self.assertIsNone(serializer.serialize(None))
        self.assertIsNone(serializer.deserialize(None))

    def test_polymorphic_serializer_primitive_types(self):
        serializer = serialization.get_polymorphic_serializer()

        self.assertEqual(17, serializer.deserialize(serializer.serialize(17)))
        self.assertEqual(
            0.34,
            serializer.deserialize(serializer.serialize(0.34))
        )
        self.assertEqual(-5, serializer.deserialize(serializer.serialize(-5)))
        self.assertEqual(
            -6.3,
            serializer.deserialize(serializer.serialize(-6.3))
        )
        self.assertFalse(serializer.deserialize(serializer.serialize(False)))
        self.assertTrue(serializer.deserialize(serializer.serialize(True)))
        self.assertEqual(
            'abc',
            serializer.deserialize(serializer.serialize('abc'))
        )
        self.assertEqual(
            {'a': 'b', 'c': 'd'},
            serializer.deserialize(serializer.serialize({'a': 'b', 'c': 'd'}))
        )
        self.assertEqual(
            ['a', 'b', 'c'],
            serializer.deserialize(serializer.serialize(['a', 'b', 'c']))
        )

    def test_polymorphic_serializer_custom_object(self):
        serializer = serialization.get_polymorphic_serializer()

        obj = MyClass('a', 'b')

        s = serializer.serialize(obj)

        self.assertIn('__serial_key', s)
        self.assertIn('__serial_data', s)

        self.assertEqual(obj, serializer.deserialize(s))

        self.assertIsNone(serializer.serialize(None))
        self.assertIsNone(serializer.deserialize(None))

    def test_register_twice(self):
        self.assertRaises(
            RuntimeError,
            serialization.register_serializer,
            MyClass,
            MyClassSerializer()
        )

mistral-6.0.0/mistral/tests/unit/services/
mistral-6.0.0/mistral/tests/unit/services/test_action_manager.py

# Copyright 2014 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from mistral.db.v2 import api as db_api
from mistral.tests.unit import base


class ActionManagerTest(base.DbTestCase):
    def test_action_input(self):
        std_http = db_api.get_action_definition("std.http")
        std_email = db_api.get_action_definition("std.email")

        http_action_input = (
            'url, method="GET", params=null, body=null, '
            'headers=null, cookies=null, auth=null, '
            'timeout=null, allow_redirects=null, '
            'proxies=null, verify=null'
        )

        self.assertEqual(http_action_input, std_http.input)

        std_email_input = (
            "from_addr, to_addrs, smtp_server, "
            "smtp_password=null, subject=null, body=null"
        )

        self.assertEqual(std_email_input, std_email.input)

    def test_action_description(self):
        std_http = db_api.get_action_definition("std.http")
        std_echo = db_api.get_action_definition("std.echo")

        self.assertIn("Constructs an HTTP action", std_http.description)
        self.assertIn("param body: (optional) Dictionary, bytes",
                      std_http.description)

        self.assertIn("This action just returns a configured value",
                      std_echo.description)

mistral-6.0.0/mistral/tests/unit/services/test_action_service.py

# Copyright 2014 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg from mistral.db.v2 import api as db_api from mistral.lang import parser as spec_parser from mistral.services import actions as action_service from mistral.tests.unit import base from mistral import utils # Use the set_default method to set value otherwise in certain test cases # the change in value is not permanent. cfg.CONF.set_default('auth_enable', False, group='pecan') ACTION_LIST = """ --- version: '2.0' action1: tags: [test, v2] base: std.echo output='Hi' output: result: $ action2: base: std.echo output='Hey' output: result: $ """ UPDATED_ACTION_LIST = """ --- version: '2.0' action1: base: std.echo output='Hi' input: - param1 output: result: $ """ class ActionServiceTest(base.DbTestCase): def setUp(self): super(ActionServiceTest, self).setUp() self.addCleanup(db_api.delete_action_definitions, name='action1') self.addCleanup(db_api.delete_action_definitions, name='action2') def test_create_actions(self): db_actions = action_service.create_actions(ACTION_LIST) self.assertEqual(2, len(db_actions)) # Action 1. action1_db = self._assert_single_item(db_actions, name='action1') action1_spec = spec_parser.get_action_spec(action1_db.spec) self.assertEqual('action1', action1_spec.get_name()) self.assertListEqual(['test', 'v2'], action1_spec.get_tags()) self.assertEqual('std.echo', action1_spec.get_base()) self.assertDictEqual({'output': 'Hi'}, action1_spec.get_base_input()) # Action 2. 
action2_db = self._assert_single_item(db_actions, name='action2') action2_spec = spec_parser.get_action_spec(action2_db.spec) self.assertEqual('action2', action2_spec.get_name()) self.assertEqual('std.echo', action1_spec.get_base()) self.assertDictEqual({'output': 'Hey'}, action2_spec.get_base_input()) def test_update_actions(self): db_actions = action_service.create_actions(ACTION_LIST) self.assertEqual(2, len(db_actions)) action1_db = self._assert_single_item(db_actions, name='action1') action1_spec = spec_parser.get_action_spec(action1_db.spec) self.assertEqual('action1', action1_spec.get_name()) self.assertEqual('std.echo', action1_spec.get_base()) self.assertDictEqual({'output': 'Hi'}, action1_spec.get_base_input()) self.assertDictEqual({}, action1_spec.get_input()) db_actions = action_service.update_actions(UPDATED_ACTION_LIST) # Action 1. action1_db = self._assert_single_item(db_actions, name='action1') action1_spec = spec_parser.get_action_spec(action1_db.spec) self.assertEqual('action1', action1_spec.get_name()) self.assertListEqual([], action1_spec.get_tags()) self.assertEqual('std.echo', action1_spec.get_base()) self.assertDictEqual({'output': 'Hi'}, action1_spec.get_base_input()) self.assertIn('param1', action1_spec.get_input()) self.assertIs( action1_spec.get_input().get('param1'), utils.NotDefined ) mistral-6.0.0/mistral/tests/unit/services/test_scheduler.py0000666000175100017510000002416513245513262024237 0ustar zuulzuul00000000000000# Copyright 2014 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. import datetime import eventlet import mock from eventlet import queue from eventlet import timeout from mistral import context as auth_context from mistral.db.v2 import api as db_api from mistral import exceptions as exc from mistral.services import scheduler from mistral.tests.unit import base from mistral_lib import actions as ml_actions TARGET_METHOD_PATH = ( 'mistral.tests.unit.services.test_scheduler.target_method' ) DELAY = 1.5 def get_time_delay(delay=DELAY): return datetime.datetime.now() + datetime.timedelta(seconds=delay) def target_method(): pass class SchedulerServiceTest(base.DbTestCase): def setUp(self): super(SchedulerServiceTest, self).setUp() self.timeout = timeout.Timeout(seconds=10) self.queue = queue.Queue() self.scheduler = scheduler.Scheduler(0, 1, None) self.scheduler.start() self.addCleanup(self.scheduler.stop, True) self.addCleanup(self.timeout.cancel) def target_method(self, *args, **kwargs): self.queue.put(item="item") def target_check_context_method(self, expected_project_id): actual_project_id = auth_context.ctx().project_id self.queue.put(item=(expected_project_id == actual_project_id)) @mock.patch(TARGET_METHOD_PATH) def test_scheduler_with_factory(self, factory): target_method_name = 'run_something' factory.return_value = type( 'something', (object,), { target_method_name: mock.MagicMock(side_effect=self.target_method) } ) scheduler.schedule_call( TARGET_METHOD_PATH, target_method_name, DELAY, **{'name': 'task', 'id': '123'} ) calls = db_api.get_delayed_calls_to_start(get_time_delay()) call = self._assert_single_item( calls, target_method_name=target_method_name ) self.assertIn('name', call['method_arguments']) self.queue.get() factory().run_something.assert_called_once_with(name='task', id='123') calls = db_api.get_delayed_calls_to_start(get_time_delay()) self.assertEqual(0, len(calls)) @mock.patch(TARGET_METHOD_PATH) def 
test_scheduler_without_factory(self, method): method.side_effect = self.target_method scheduler.schedule_call( None, TARGET_METHOD_PATH, DELAY, **{'name': 'task', 'id': '321'} ) calls = db_api.get_delayed_calls_to_start(get_time_delay()) call = self._assert_single_item( calls, target_method_name=TARGET_METHOD_PATH ) self.assertIn('name', call['method_arguments']) self.queue.get() method.assert_called_once_with(name='task', id='321') calls = db_api.get_delayed_calls_to_start(get_time_delay()) self.assertEqual(0, len(calls)) @mock.patch(TARGET_METHOD_PATH) def test_scheduler_call_target_method_with_correct_auth(self, method): method.side_effect = self.target_check_context_method default_context = base.get_context(default=True) auth_context.set_ctx(default_context) default_project_id = ( default_context.project_id ) scheduler.schedule_call( None, TARGET_METHOD_PATH, DELAY, **{'expected_project_id': default_project_id} ) second_context = base.get_context(default=False) auth_context.set_ctx(second_context) second_project_id = ( second_context.project_id ) scheduler.schedule_call( None, TARGET_METHOD_PATH, DELAY, **{'expected_project_id': second_project_id} ) self.assertNotEqual(default_project_id, second_project_id) for _ in range(2): self.assertTrue(self.queue.get()) @mock.patch(TARGET_METHOD_PATH) def test_scheduler_with_serializer(self, factory): target_method_name = 'run_something' factory.return_value = type( 'something', (object,), { target_method_name: mock.MagicMock(side_effect=self.target_method) } ) task_result = ml_actions.Result('data', 'error') method_args = { 'name': 'task', 'id': '123', 'result': task_result } serializers = { 'result': 'mistral.workflow.utils.ResultSerializer' } scheduler.schedule_call( TARGET_METHOD_PATH, target_method_name, DELAY, serializers=serializers, **method_args ) calls = db_api.get_delayed_calls_to_start(get_time_delay()) call = self._assert_single_item( calls, target_method_name=target_method_name ) self.assertIn('name', 
call['method_arguments']) self.queue.get() result = factory().run_something.call_args[1].get('result') self.assertIsInstance(result, ml_actions.Result) self.assertEqual('data', result.data) self.assertEqual('error', result.error) calls = db_api.get_delayed_calls_to_start(get_time_delay()) self.assertEqual(0, len(calls)) @mock.patch(TARGET_METHOD_PATH) def test_scheduler_multi_instance(self, method): method.side_effect = self.target_method second_scheduler = scheduler.Scheduler(1, 1, None) second_scheduler.start() self.addCleanup(second_scheduler.stop, True) scheduler.schedule_call( None, TARGET_METHOD_PATH, DELAY, **{'name': 'task', 'id': '321'} ) calls = db_api.get_delayed_calls_to_start(get_time_delay()) self._assert_single_item(calls, target_method_name=TARGET_METHOD_PATH) self.queue.get() method.assert_called_once_with(name='task', id='321') calls = db_api.get_delayed_calls_to_start(get_time_delay()) self.assertEqual(0, len(calls)) @mock.patch(TARGET_METHOD_PATH) def test_scheduler_delete_calls(self, method): method.side_effect = self.target_method scheduler.schedule_call( None, TARGET_METHOD_PATH, DELAY, **{'name': 'task', 'id': '321'} ) calls = db_api.get_delayed_calls_to_start(get_time_delay()) self._assert_single_item(calls, target_method_name=TARGET_METHOD_PATH) self.queue.get() self.assertRaises( exc.DBEntityNotFoundError, db_api.get_delayed_call, calls[0].id ) @mock.patch(TARGET_METHOD_PATH) def test_processing_true_does_not_return_in_get_delayed_calls_to_start( self, method): method.side_effect = self.target_method values = { 'factory_method_path': None, 'target_method_name': TARGET_METHOD_PATH, 'execution_time': get_time_delay(), 'auth_context': None, 'serializers': None, 'method_arguments': None, 'processing': True } call = db_api.create_delayed_call(values) calls = db_api.get_delayed_calls_to_start(get_time_delay(10)) self.assertEqual(0, len(calls)) db_api.delete_delayed_call(call.id) @mock.patch.object(db_api, 'update_delayed_call') def 
test_scheduler_doesnt_handle_calls_the_failed_on_update( self, update_delayed_call): def update_call_failed(id, values, query_filter): self.queue.put("item") return None, 0 update_delayed_call.side_effect = update_call_failed scheduler.schedule_call( None, TARGET_METHOD_PATH, DELAY, **{'name': 'task', 'id': '321'} ) calls = db_api.get_delayed_calls_to_start(get_time_delay()) self.queue.get() eventlet.sleep(1) update_delayed_call.assert_called_with( id=calls[0].id, values=mock.ANY, query_filter=mock.ANY ) # If the scheduler handled calls that failed on update, # a DBEntityNotFoundError would be raised here. db_api.get_delayed_call(calls[0].id) db_api.delete_delayed_call(calls[0].id) def test_scheduler_with_custom_batch_size(self): self.scheduler.stop() number_delayed_calls = 5 processed_calls_at_time = [] real_delete_calls_method = scheduler.Scheduler.delete_calls @staticmethod def delete_calls_counter(delayed_calls): real_delete_calls_method(delayed_calls) for _ in range(len(delayed_calls)): self.queue.put("item") processed_calls_at_time.append(len(delayed_calls)) scheduler.Scheduler.delete_calls = delete_calls_counter # Create 5 delayed calls for i in range(number_delayed_calls): scheduler.schedule_call( None, TARGET_METHOD_PATH, 0, **{'name': 'task', 'id': i} ) # Start a scheduler which processes 2 calls at a time self.scheduler = scheduler.Scheduler(0, 1, 2) self.scheduler.start() # Wait until all of the calls have been processed for _ in range(number_delayed_calls): self.queue.get() self.assertEqual( [2, 2, 1], processed_calls_at_time )
mistral-6.0.0/mistral/tests/unit/services/test_workbook_service.py # Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg from mistral.db.v2 import api as db_api from mistral.lang import parser as spec_parser from mistral.services import workbooks as wb_service from mistral.tests.unit import base # Use the set_default method to set value otherwise in certain test cases # the change in value is not permanent. cfg.CONF.set_default('auth_enable', False, group='pecan') WORKBOOK = """ --- version: '2.0' name: my_wb tags: [test] actions: concat: base: std.echo base-input: output: "{$.str1}{$.str2}" workflows: wf1: #Sample Comment 1 type: reverse tags: [wf_test] input: - param1 output: result: "{$.result}" tasks: task1: action: std.echo output="{$.param1}" publish: result: "{$}" wf2: type: direct output: result: "{$.result}" tasks: task1: workflow: my_wb.wf1 param1='Hi' task_name='task1' publish: result: "The result of subworkflow is '{$.final_result}'" """ WORKBOOK_WF1_DEFINITION = """wf1: #Sample Comment 1 type: reverse tags: [wf_test] input: - param1 output: result: "{$.result}" tasks: task1: action: std.echo output="{$.param1}" publish: result: "{$}" """ WORKBOOK_WF2_DEFINITION = """wf2: type: direct output: result: "{$.result}" tasks: task1: workflow: my_wb.wf1 param1='Hi' task_name='task1' publish: result: "The result of subworkflow is '{$.final_result}'" """ UPDATED_WORKBOOK = """ --- version: '2.0' name: my_wb tags: [test] actions: concat: base: std.echo base-input: output: "{$.str1}{$.str2}" workflows: wf1: type: direct output: result: "{$.result}" tasks: task1: workflow: my_wb.wf2 param1='Hi' task_name='task1' publish: result: "The result of subworkflow 
is '{$.final_result}'" wf2: type: reverse input: - param1 output: result: "{$.result}" tasks: task1: action: std.echo output="{$.param1}" publish: result: "{$}" """ UPDATED_WORKBOOK_WF1_DEFINITION = """wf1: type: direct output: result: "{$.result}" tasks: task1: workflow: my_wb.wf2 param1='Hi' task_name='task1' publish: result: "The result of subworkflow is '{$.final_result}'" """ UPDATED_WORKBOOK_WF2_DEFINITION = """wf2: type: reverse input: - param1 output: result: "{$.result}" tasks: task1: action: std.echo output="{$.param1}" publish: result: "{$}" """ ACTION_DEFINITION = """concat: base: std.echo base-input: output: "{$.str1}{$.str2}" """ class WorkbookServiceTest(base.DbTestCase): def test_create_workbook(self): wb_db = wb_service.create_workbook_v2(WORKBOOK) self.assertIsNotNone(wb_db) self.assertEqual('my_wb', wb_db.name) self.assertEqual(WORKBOOK, wb_db.definition) self.assertIsNotNone(wb_db.spec) self.assertListEqual(['test'], wb_db.tags) db_actions = db_api.get_action_definitions(name='my_wb.concat') self.assertEqual(1, len(db_actions)) # Action. action_db = self._assert_single_item(db_actions, name='my_wb.concat') self.assertFalse(action_db.is_system) action_spec = spec_parser.get_action_spec(action_db.spec) self.assertEqual('concat', action_spec.get_name()) self.assertEqual('std.echo', action_spec.get_base()) self.assertEqual(ACTION_DEFINITION, action_db.definition) db_wfs = db_api.get_workflow_definitions() self.assertEqual(2, len(db_wfs)) # Workflow 1. wf1_db = self._assert_single_item(db_wfs, name='my_wb.wf1') wf1_spec = spec_parser.get_workflow_spec(wf1_db.spec) self.assertEqual('wf1', wf1_spec.get_name()) self.assertEqual('reverse', wf1_spec.get_type()) self.assertListEqual(['wf_test'], wf1_spec.get_tags()) self.assertListEqual(['wf_test'], wf1_db.tags) self.assertEqual(WORKBOOK_WF1_DEFINITION, wf1_db.definition) # Workflow 2. 
wf2_db = self._assert_single_item(db_wfs, name='my_wb.wf2') wf2_spec = spec_parser.get_workflow_spec(wf2_db.spec) self.assertEqual('wf2', wf2_spec.get_name()) self.assertEqual('direct', wf2_spec.get_type()) self.assertEqual(WORKBOOK_WF2_DEFINITION, wf2_db.definition) def test_update_workbook(self): # Create workbook. wb_db = wb_service.create_workbook_v2(WORKBOOK) self.assertIsNotNone(wb_db) self.assertEqual(2, len(db_api.get_workflow_definitions())) # Update workbook. wb_db = wb_service.update_workbook_v2(UPDATED_WORKBOOK) self.assertIsNotNone(wb_db) self.assertEqual('my_wb', wb_db.name) self.assertEqual(UPDATED_WORKBOOK, wb_db.definition) self.assertListEqual(['test'], wb_db.tags) db_wfs = db_api.get_workflow_definitions() self.assertEqual(2, len(db_wfs)) # Workflow 1. wf1_db = self._assert_single_item(db_wfs, name='my_wb.wf1') wf1_spec = spec_parser.get_workflow_spec(wf1_db.spec) self.assertEqual('wf1', wf1_spec.get_name()) self.assertEqual('direct', wf1_spec.get_type()) self.assertEqual(UPDATED_WORKBOOK_WF1_DEFINITION, wf1_db.definition) # Workflow 2. wf2_db = self._assert_single_item(db_wfs, name='my_wb.wf2') wf2_spec = spec_parser.get_workflow_spec(wf2_db.spec) self.assertEqual('wf2', wf2_spec.get_name()) self.assertEqual('reverse', wf2_spec.get_type()) self.assertEqual(UPDATED_WORKBOOK_WF2_DEFINITION, wf2_db.definition)
mistral-6.0.0/mistral/tests/unit/services/test_trigger_service.py # Copyright 2014 - Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and # limitations under the License. import datetime import eventlet import mock from oslo_config import cfg from mistral import exceptions as exc from mistral.rpc import clients as rpc from mistral.services import periodic from mistral.services import security from mistral.services import triggers as t_s from mistral.services import workflows from mistral.tests.unit import base from mistral import utils # Use the set_default method to set value otherwise in certain test cases # the change in value is not permanent. cfg.CONF.set_default('auth_enable', False, group='pecan') WORKFLOW_LIST = """ --- version: '2.0' my_wf: type: direct tasks: task1: action: std.echo output='Hi!' """ advance_cron_trigger_orig = periodic.advance_cron_trigger def new_advance_cron_trigger(ct): """Wrap the original advance_cron_trigger method. This method makes sure that the other coroutines will also run while this thread is executing. Without explicitly passing control to another coroutine the process_cron_triggers_v2 will finish looping over all the cron triggers in one coroutine without any sharing at all. 
""" eventlet.sleep() modified = advance_cron_trigger_orig(ct) eventlet.sleep() return modified class TriggerServiceV2Test(base.DbTestCase): def setUp(self): super(TriggerServiceV2Test, self).setUp() self.wf = workflows.create_workflows(WORKFLOW_LIST)[0] def test_trigger_create(self): trigger = t_s.create_cron_trigger( 'trigger-%s' % utils.generate_unicode_uuid(), self.wf.name, {}, {}, '*/5 * * * *', None, None, datetime.datetime(2010, 8, 25) ) self.assertEqual( datetime.datetime(2010, 8, 25, 0, 5), trigger.next_execution_time ) next_time = t_s.get_next_execution_time( trigger['pattern'], trigger.next_execution_time ) self.assertEqual(datetime.datetime(2010, 8, 25, 0, 10), next_time) def test_trigger_create_with_wf_id(self): trigger = t_s.create_cron_trigger( 'trigger-%s' % utils.generate_unicode_uuid(), None, {}, {}, '*/5 * * * *', None, None, datetime.datetime(2010, 8, 25), workflow_id=self.wf.id ) self.assertEqual(self.wf.name, trigger.workflow_name) def test_trigger_create_the_same_first_time_or_count(self): t_s.create_cron_trigger( 'trigger-%s' % utils.generate_unicode_uuid(), self.wf.name, {}, {}, '*/5 * * * *', "4242-12-25 13:37", 2, datetime.datetime(2010, 8, 25) ) t_s.create_cron_trigger( 'trigger-%s' % utils.generate_unicode_uuid(), self.wf.name, {}, {}, '*/5 * * * *', "4242-12-25 13:37", 4, datetime.datetime(2010, 8, 25) ) t_s.create_cron_trigger( 'trigger-%s' % utils.generate_unicode_uuid(), self.wf.name, {}, {}, '*/5 * * * *', "5353-12-25 13:37", 2, datetime.datetime(2010, 8, 25) ) # Creations above should be ok. # But creation with the same count and first time # simultaneously leads to error. 
self.assertRaises( exc.DBDuplicateEntryError, t_s.create_cron_trigger, 'trigger-%s' % utils.generate_unicode_uuid(), self.wf.name, {}, {}, '*/5 * * * *', "4242-12-25 13:37", 2, None ) def test_trigger_create_wrong_workflow_input(self): wf_with_input = """--- version: '2.0' some_wf: input: - some_var tasks: some_task: action: std.echo output=<% $.some_var %> """ workflows.create_workflows(wf_with_input) exception = self.assertRaises( exc.InputException, t_s.create_cron_trigger, 'trigger-%s' % utils.generate_unicode_uuid(), 'some_wf', {}, {}, '*/5 * * * *', None, None, datetime.datetime(2010, 8, 25) ) self.assertIn('Invalid input', str(exception)) self.assertIn('some_wf', str(exception)) def test_oneshot_trigger_create(self): trigger = t_s.create_cron_trigger( 'trigger-%s' % utils.generate_unicode_uuid(), self.wf.name, {}, {}, None, "4242-12-25 13:37", None, datetime.datetime(2010, 8, 25) ) self.assertEqual( datetime.datetime(4242, 12, 25, 13, 37), trigger.next_execution_time ) @mock.patch.object(security, 'create_trust', type('trust', (object,), {'id': 'my_trust_id'})) def test_create_trust_in_trigger(self): cfg.CONF.set_default('auth_enable', True, group='pecan') self.addCleanup( cfg.CONF.set_default, 'auth_enable', False, group='pecan' ) trigger = t_s.create_cron_trigger( 'trigger-%s' % utils.generate_unicode_uuid(), self.wf.name, {}, {}, '*/2 * * * *', None, None, datetime.datetime(2010, 8, 25) ) self.assertEqual('my_trust_id', trigger.trust_id) @mock.patch.object(security, 'create_trust', type('trust', (object,), {'id': 'my_trust_id'})) @mock.patch.object(security, 'create_context') @mock.patch.object(rpc.EngineClient, 'start_workflow', mock.Mock()) @mock.patch( 'mistral.services.periodic.advance_cron_trigger', mock.MagicMock(side_effect=new_advance_cron_trigger) ) @mock.patch.object(security, 'delete_trust') def test_create_delete_trust_in_trigger(self, delete_trust, create_ctx): create_ctx.return_value = self.ctx cfg.CONF.set_default('auth_enable', True, 
group='pecan') trigger_thread = periodic.setup() self.addCleanup(trigger_thread.stop) self.addCleanup( cfg.CONF.set_default, 'auth_enable', False, group='pecan' ) t_s.create_cron_trigger( 'trigger-%s' % utils.generate_unicode_uuid(), self.wf.name, {}, {}, '* * * * * *', None, 1, datetime.datetime(2010, 8, 25) ) eventlet.sleep(1) self.assertEqual(0, delete_trust.call_count) def test_get_trigger_in_correct_orders(self): t1_name = 'trigger-%s' % utils.generate_unicode_uuid() t_s.create_cron_trigger( t1_name, self.wf.name, {}, pattern='*/5 * * * *', start_time=datetime.datetime(2010, 8, 25) ) t2_name = 'trigger-%s' % utils.generate_unicode_uuid() t_s.create_cron_trigger( t2_name, self.wf.name, {}, pattern='*/1 * * * *', start_time=datetime.datetime(2010, 8, 22) ) t3_name = 'trigger-%s' % utils.generate_unicode_uuid() t_s.create_cron_trigger( t3_name, self.wf.name, {}, pattern='*/2 * * * *', start_time=datetime.datetime(2010, 9, 21) ) t4_name = 'trigger-%s' % utils.generate_unicode_uuid() t_s.create_cron_trigger( t4_name, self.wf.name, {}, pattern='*/3 * * * *', start_time=datetime.datetime.utcnow() + datetime.timedelta(0, 50) ) trigger_names = [t.name for t in t_s.get_next_cron_triggers()] self.assertEqual([t2_name, t1_name, t3_name], trigger_names) @mock.patch( 'mistral.services.periodic.advance_cron_trigger', mock.MagicMock(side_effect=new_advance_cron_trigger) ) @mock.patch.object(rpc.EngineClient, 'start_workflow') def test_single_execution_with_multiple_processes(self, start_wf_mock): def stop_thread_groups(): print('Killing cron trigger threads...') [tg.stop() for tg in self.trigger_threads] self.trigger_threads = [ periodic.setup(), periodic.setup(), periodic.setup() ] self.addCleanup(stop_thread_groups) trigger_count = 5 t_s.create_cron_trigger( 'ct1', self.wf.name, {}, {}, '* * * * * */1', # Every second None, trigger_count, datetime.datetime(2010, 8, 25) ) # Wait until there are 'trigger_count' executions. 
self._await( lambda: self._wait_for_single_execution_with_multiple_processes( trigger_count, start_wf_mock ) ) # Wait some more and make sure there are no more than 'trigger_count' # executions. eventlet.sleep(5) self.assertEqual(trigger_count, start_wf_mock.call_count) def _wait_for_single_execution_with_multiple_processes(self, trigger_count, start_wf_mock): eventlet.sleep(1) return trigger_count == start_wf_mock.call_count def test_get_next_execution_time(self): pattern = '*/20 * * * *' start_time = datetime.datetime(2016, 3, 22, 23, 40) result = t_s.get_next_execution_time(pattern, start_time) self.assertEqual(result, datetime.datetime(2016, 3, 23, 0, 0))
mistral-6.0.0/mistral/tests/unit/services/__init__.py
mistral-6.0.0/mistral/tests/unit/services/test_workflow_service.py # Copyright 2014 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
import copy from oslo_config import cfg from mistral.db.v2.sqlalchemy import api as db_api from mistral import exceptions as exc from mistral.lang import parser as spec_parser import mistral.lang.v2.tasks as tasks from mistral.services import workflows as wf_service from mistral.tests.unit import base from mistral import utils from mistral.workflow import states # Use the set_default method to set value otherwise in certain test cases # the change in value is not permanent. cfg.CONF.set_default('auth_enable', False, group='pecan') WORKFLOW_LIST = """ --- version: '2.0' wf1: tags: [test, v2] type: reverse input: - param1 output: result: "{$.result}" tasks: task1: action: std.echo output="{$.param1}" publish: result: "{$}" wf2: type: direct output: result: "{$.result}" tasks: task1: workflow: my_wb.wf1 param1='Hi' task_name='task1' publish: result: "The result of subworkflow is '{$.final_result}'" """ UPDATED_WORKFLOW_LIST = """ --- version: '2.0' wf1: type: reverse input: - param1 - param2 output: result: "{$.result}" tasks: task1: action: std.echo output="{$.param1}{$.param2}" publish: result: "{$}" """ WORKFLOW_WITH_VAR_TASK_NAME = """ --- version: '2.0' list_servers: tasks: {task_name}: action: nova.servers_list """ WORKFLOW = WORKFLOW_WITH_VAR_TASK_NAME.format(task_name='task1') INVALID_WORKFLOW = """ --- verstion: '2.0' wf: type: direct tasks: task1: action: std.echo output="Task 1" """ class WorkflowServiceTest(base.DbTestCase): def test_create_workflows(self): db_wfs = wf_service.create_workflows(WORKFLOW_LIST) self.assertEqual(2, len(db_wfs)) # Workflow 1. wf1_db = self._assert_single_item(db_wfs, name='wf1') wf1_spec = spec_parser.get_workflow_spec(wf1_db.spec) self.assertEqual('wf1', wf1_spec.get_name()) self.assertListEqual(['test', 'v2'], wf1_spec.get_tags()) self.assertEqual('reverse', wf1_spec.get_type()) # Workflow 2. 
wf2_db = self._assert_single_item(db_wfs, name='wf2') wf2_spec = spec_parser.get_workflow_spec(wf2_db.spec) self.assertEqual('wf2', wf2_spec.get_name()) self.assertEqual('direct', wf2_spec.get_type()) def test_invalid_task_name(self): for name in tasks.RESERVED_TASK_NAMES: wf = WORKFLOW_WITH_VAR_TASK_NAME.format(task_name=name) self.assertRaises( exc.InvalidModelException, wf_service.create_workflows, wf ) def test_update_workflows(self): db_wfs = wf_service.create_workflows(WORKFLOW_LIST) self.assertEqual(2, len(db_wfs)) # Workflow 1. wf1_db = self._assert_single_item(db_wfs, name='wf1') wf1_spec = spec_parser.get_workflow_spec(wf1_db.spec) self.assertEqual('wf1', wf1_spec.get_name()) self.assertEqual('reverse', wf1_spec.get_type()) self.assertIn('param1', wf1_spec.get_input()) self.assertIs( wf1_spec.get_input().get('param1'), utils.NotDefined ) db_wfs = wf_service.update_workflows(UPDATED_WORKFLOW_LIST) self.assertEqual(1, len(db_wfs)) wf1_db = self._assert_single_item(db_wfs, name='wf1') wf1_spec = spec_parser.get_workflow_spec(wf1_db.spec) self.assertEqual('wf1', wf1_spec.get_name()) self.assertListEqual([], wf1_spec.get_tags()) self.assertEqual('reverse', wf1_spec.get_type()) self.assertIn('param1', wf1_spec.get_input()) self.assertIn('param2', wf1_spec.get_input()) self.assertIs( wf1_spec.get_input().get('param1'), utils.NotDefined ) self.assertIs( wf1_spec.get_input().get('param2'), utils.NotDefined ) def test_update_non_existing_workflow_failed(self): exception = self.assertRaises( exc.DBEntityNotFoundError, wf_service.update_workflows, WORKFLOW ) self.assertIn("Workflow not found", str(exception)) def test_invalid_workflow_list(self): exception = self.assertRaises( exc.InvalidModelException, wf_service.create_workflows, INVALID_WORKFLOW ) self.assertIn("Invalid DSL", str(exception)) def test_update_workflow_execution_env(self): wf_exec_template = { 'spec': {}, 'start_params': {'task': 'my_task1'}, 'state': 'PAUSED', 'state_info': None, 'params': {'env': 
{'k1': 'abc'}}, 'created_at': None, 'updated_at': None, 'context': {'__env': {'k1': 'fee fi fo fum'}}, 'task_id': None, 'trust_id': None, 'description': None, 'output': None } states_permitted = [ states.IDLE, states.PAUSED, states.ERROR ] update_env = {'k1': 'foobar'} for state in states_permitted: wf_exec = copy.deepcopy(wf_exec_template) wf_exec['state'] = state with db_api.transaction(): created = db_api.create_workflow_execution(wf_exec) self.assertIsNone(created.updated_at) updated = wf_service.update_workflow_execution_env( created, update_env ) self.assertDictEqual(update_env, updated.params['env']) self.assertDictEqual(update_env, updated.context['__env']) fetched = db_api.get_workflow_execution(created.id) self.assertEqual(updated, fetched) self.assertIsNotNone(fetched.updated_at) def test_update_workflow_execution_env_wrong_state(self): wf_exec_template = { 'spec': {}, 'start_params': {'task': 'my_task1'}, 'state': 'PAUSED', 'state_info': None, 'params': {'env': {'k1': 'abc'}}, 'created_at': None, 'updated_at': None, 'context': {'__env': {'k1': 'fee fi fo fum'}}, 'task_id': None, 'trust_id': None, 'description': None, 'output': None } states_not_permitted = [ states.RUNNING, states.RUNNING_DELAYED, states.SUCCESS, states.WAITING ] update_env = {'k1': 'foobar'} for state in states_not_permitted: wf_exec = copy.deepcopy(wf_exec_template) wf_exec['state'] = state with db_api.transaction(): created = db_api.create_workflow_execution(wf_exec) self.assertIsNone(created.updated_at) self.assertRaises( exc.NotAllowedException, wf_service.update_workflow_execution_env, created, update_env ) fetched = db_api.get_workflow_execution(created.id) self.assertDictEqual( wf_exec['params']['env'], fetched.params['env'] ) self.assertDictEqual( wf_exec['context']['__env'], fetched.context['__env'] )
mistral-6.0.0/mistral/tests/unit/services/test_event_engine.py # Copyright 2016 Catalyst IT Ltd #
Copyright 2017 Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy import time import mock from oslo_config import cfg from mistral import context as auth_context from mistral.db.v2.sqlalchemy import api as db_api from mistral.event_engine import default_event_engine as evt_eng from mistral.rpc import clients as rpc from mistral.services import workflows from mistral.tests.unit import base WORKFLOW_LIST = """ --- version: '2.0' my_wf: type: direct tasks: task1: action: std.echo output='Hi!' 
""" EXCHANGE_TOPIC = ('openstack', 'notification') EVENT_TYPE = 'compute.instance.create.start' EVENT_TRIGGER = { 'name': 'trigger1', 'workflow_id': '', 'workflow_input': {}, 'workflow_params': {}, 'exchange': 'openstack', 'topic': 'notification', 'event': EVENT_TYPE, } cfg.CONF.set_default('auth_enable', False, group='pecan') class EventEngineTest(base.DbTestCase): def setUp(self): super(EventEngineTest, self).setUp() self.wf = workflows.create_workflows(WORKFLOW_LIST)[0] EVENT_TRIGGER['workflow_id'] = self.wf.id @mock.patch.object(rpc, 'get_engine_client', mock.Mock()) def test_event_engine_start_with_no_triggers(self): e_engine = evt_eng.DefaultEventEngine() self.addCleanup(e_engine.handler_tg.stop) self.assertEqual(0, len(e_engine.event_triggers_map)) self.assertEqual(0, len(e_engine.exchange_topic_events_map)) self.assertEqual(0, len(e_engine.exchange_topic_listener_map)) @mock.patch('mistral.messaging.start_listener') @mock.patch.object(rpc, 'get_engine_client', mock.Mock()) def test_event_engine_start_with_triggers(self, mock_start): trigger = db_api.create_event_trigger(EVENT_TRIGGER) e_engine = evt_eng.DefaultEventEngine() self.addCleanup(e_engine.handler_tg.stop) self.assertEqual(1, len(e_engine.exchange_topic_events_map)) self.assertEqual( EVENT_TYPE, list(e_engine.exchange_topic_events_map[EXCHANGE_TOPIC])[0] ) self.assertEqual(1, len(e_engine.event_triggers_map)) self.assertEqual(1, len(e_engine.event_triggers_map[EVENT_TYPE])) self._assert_dict_contains_subset( trigger.to_dict(), e_engine.event_triggers_map[EVENT_TYPE][0] ) self.assertEqual(1, len(e_engine.exchange_topic_listener_map)) @mock.patch('mistral.messaging.start_listener') @mock.patch.object(rpc, 'get_engine_client', mock.Mock()) def test_event_engine_public_trigger(self, mock_start): t = copy.deepcopy(EVENT_TRIGGER) # Create public trigger as an admin self.ctx = base.get_context(default=False, admin=True) auth_context.set_ctx(self.ctx) t['scope'] = 'public' t['project_id'] = self.ctx.tenant 
trigger = db_api.create_event_trigger(t) # Switch to the user. self.ctx = base.get_context(default=True) auth_context.set_ctx(self.ctx) e_engine = evt_eng.DefaultEventEngine() self.addCleanup(e_engine.handler_tg.stop) event = { 'event_type': EVENT_TYPE, 'payload': {}, 'publisher': 'fake_publisher', 'timestamp': '', 'context': { 'project_id': '%s' % self.ctx.project_id, 'user_id': 'fake_user' }, } # Moreover, assert that trigger.project_id != event.project_id self.assertNotEqual( trigger.project_id, event['context']['project_id'] ) with mock.patch.object(e_engine, 'engine_client') as client_mock: e_engine.event_queue.put(event) time.sleep(1) self.assertEqual(1, client_mock.start_workflow.call_count) args, kwargs = client_mock.start_workflow.call_args self.assertEqual((EVENT_TRIGGER['workflow_id'], '', {}), args) self.assertDictEqual( { 'service': 'fake_publisher', 'project_id': '%s' % self.ctx.project_id, 'user_id': 'fake_user', 'timestamp': '' }, kwargs['event_params'] ) @mock.patch('mistral.messaging.start_listener') @mock.patch.object(rpc, 'get_engine_client', mock.Mock()) def test_process_event_queue(self, mock_start): EVENT_TRIGGER['project_id'] = self.ctx.project_id db_api.create_event_trigger(EVENT_TRIGGER) e_engine = evt_eng.DefaultEventEngine() self.addCleanup(e_engine.handler_tg.stop) event = { 'event_type': EVENT_TYPE, 'payload': {}, 'publisher': 'fake_publisher', 'timestamp': '', 'context': { 'project_id': '%s' % self.ctx.project_id, 'user_id': 'fake_user' }, } with mock.patch.object(e_engine, 'engine_client') as client_mock: e_engine.event_queue.put(event) time.sleep(1) self.assertEqual(1, client_mock.start_workflow.call_count) args, kwargs = client_mock.start_workflow.call_args self.assertEqual((EVENT_TRIGGER['workflow_id'], '', {}), args) self.assertDictEqual( { 'service': 'fake_publisher', 'project_id': '%s' % self.ctx.project_id, 'user_id': 'fake_user', 'timestamp': '' }, kwargs['event_params'] ) class NotificationsConverterTest(base.BaseTest): def 
test_convert(self): definition_cfg = [ { 'event_types': EVENT_TYPE, 'properties': {'resource_id': '<% $.payload.instance_id %>'} } ] converter = evt_eng.NotificationsConverter() converter.definitions = [evt_eng.EventDefinition(event_def) for event_def in reversed(definition_cfg)] notification = { 'event_type': EVENT_TYPE, 'payload': {'instance_id': '12345'}, 'publisher': 'fake_publisher', 'timestamp': '', 'context': {'project_id': 'fake_project', 'user_id': 'fake_user'} } event = converter.convert(EVENT_TYPE, notification) self.assertDictEqual( {'resource_id': '12345'}, event ) def test_convert_event_type_not_defined(self): definition_cfg = [ { 'event_types': EVENT_TYPE, 'properties': {'resource_id': '<% $.payload.instance_id %>'} } ] converter = evt_eng.NotificationsConverter() converter.definitions = [evt_eng.EventDefinition(event_def) for event_def in reversed(definition_cfg)] notification = { 'event_type': 'fake_event', 'payload': {'instance_id': '12345'}, 'publisher': 'fake_publisher', 'timestamp': '', 'context': {'project_id': 'fake_project', 'user_id': 'fake_user'} } event = converter.convert('fake_event', notification) self.assertDictEqual( { 'service': 'fake_publisher', 'project_id': 'fake_project', 'user_id': 'fake_user', 'timestamp': '' }, event )
mistral-6.0.0/mistral/tests/unit/services/test_expiration_policy.py # Copyright 2015 - Alcatel-lucent, Inc. # Copyright 2015 - StackStorm, Inc. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and # limitations under the License. import datetime from mistral import context as ctx from mistral.db.v2 import api as db_api from mistral.services import expiration_policy from mistral.services.expiration_policy import ExecutionExpirationPolicy from mistral.tests.unit import base from mistral.tests.unit.base import get_context from oslo_config import cfg def _create_workflow_executions(): time_now = datetime.datetime.utcnow() wf_execs = [ { 'id': 'success_expired', 'name': 'success_expired', 'created_at': time_now - datetime.timedelta(minutes=60), 'updated_at': time_now - datetime.timedelta(minutes=59), 'workflow_name': 'test_exec', 'state': "SUCCESS", }, { 'id': 'error_expired', 'name': 'error_expired', 'created_at': time_now - datetime.timedelta(days=3, minutes=10), 'updated_at': time_now - datetime.timedelta(days=3), 'workflow_name': 'test_exec', 'state': "ERROR", }, { 'id': 'running_not_expired', 'name': 'running_not_expired', 'created_at': time_now - datetime.timedelta(days=3, minutes=10), 'updated_at': time_now - datetime.timedelta(days=3), 'workflow_name': 'test_exec', 'state': "RUNNING", }, { 'id': 'running_not_expired2', 'name': 'running_not_expired2', 'created_at': time_now - datetime.timedelta(days=3, minutes=10), 'updated_at': time_now - datetime.timedelta(days=4), 'workflow_name': 'test_exec', 'state': "RUNNING", }, { 'id': 'success_not_expired', 'name': 'success_not_expired', 'created_at': time_now - datetime.timedelta(minutes=15), 'updated_at': time_now - datetime.timedelta(minutes=5), 'workflow_name': 'test_exec', 'state': "SUCCESS", }, { 'id': 'abc', 'name': 'cancelled_expired', 'created_at': time_now - datetime.timedelta(minutes=60), 'updated_at': time_now - datetime.timedelta(minutes=59), 'workflow_name': 'test_exec', 'state': "CANCELLED", }, { 'id': 'cancelled_not_expired', 'name': 'cancelled_not_expired', 'created_at': time_now - datetime.timedelta(minutes=15), 'updated_at': 
time_now - datetime.timedelta(minutes=6), 'workflow_name': 'test_exec', 'state': "CANCELLED", } ] for wf_exec in wf_execs: db_api.create_workflow_execution(wf_exec) # Create a nested workflow execution. db_api.create_task_execution( { 'id': 'running_not_expired', 'workflow_execution_id': 'success_not_expired', 'name': 'my_task' } ) db_api.create_workflow_execution( { 'id': 'expired_but_not_a_parent', 'name': 'expired_but_not_a_parent', 'created_at': time_now - datetime.timedelta(days=15), 'updated_at': time_now - datetime.timedelta(days=10), 'workflow_name': 'test_exec', 'state': "SUCCESS", 'task_execution_id': 'running_not_expired' } ) def _switch_context(is_default, is_admin): ctx.set_ctx(get_context(is_default, is_admin)) class ExpirationPolicyTest(base.DbTestCase): def test_expiration_policy_for_executions_with_different_project_id(self): # Deleting executions uses secured filtering, so we need to # verify that an admin is able to do that for other projects. cfg.CONF.set_default('auth_enable', True, group='pecan') # Since we are removing other projects' executions, # we want to load the executions with another project_id. _switch_context(False, False) _create_workflow_executions() now = datetime.datetime.utcnow() # This execution has a parent wf; we're testing that we # query only for parent wfs. exec_child = db_api.get_workflow_execution('expired_but_not_a_parent') self.assertEqual('running_not_expired', exec_child.task_execution_id) # Query for all expired wf executions. execs = db_api.get_expired_executions(now) # Should be only 5; the RUNNING executions shouldn't be returned, # nor should the child wf (the one that has a parent task id). self.assertEqual(5, len(execs)) # Switch context to admin since the expiration policy runs as admin. _switch_context(True, True) _set_expiration_policy_config(evaluation_interval=1, older_than=30) expiration_policy.run_execution_expiration_policy(self, ctx) # Only non-expired executions remain # (updated_at newer than the 'older_than' threshold).
execs = db_api.get_expired_executions(now) self.assertEqual(2, len(execs)) self.assertListEqual( [ 'cancelled_not_expired', 'success_not_expired' ], sorted([ex.id for ex in execs]) ) _set_expiration_policy_config(evaluation_interval=1, older_than=5) expiration_policy.run_execution_expiration_policy(self, ctx) execs = db_api.get_expired_executions(now) self.assertEqual(0, len(execs)) def test_deletion_of_expired_executions_with_batch_size_scenario1(self): """scenario1 This test will use batch_size of 3, 5 expired executions and different values of "older_than" which is 30 and 5 minutes respectively. Expected_result: All expired executions are successfully deleted. """ _create_workflow_executions() now = datetime.datetime.utcnow() _set_expiration_policy_config( evaluation_interval=1, older_than=30, batch_size=3 ) expiration_policy.run_execution_expiration_policy(self, ctx) execs = db_api.get_expired_executions(now) self.assertEqual(2, len(execs)) _set_expiration_policy_config(evaluation_interval=1, older_than=5) expiration_policy.run_execution_expiration_policy(self, ctx) execs = db_api.get_expired_executions(now) self.assertEqual(0, len(execs)) def test_deletion_of_expired_executions_with_batch_size_scenario2(self): """scenario2 This test will use batch_size of 2, 5 expired executions with value of "older_than" that is 5 minutes. Expected_result: All expired executions are successfully deleted. 
""" _create_workflow_executions() now = datetime.datetime.utcnow() _set_expiration_policy_config( evaluation_interval=1, older_than=5, batch_size=2 ) expiration_policy.run_execution_expiration_policy(self, ctx) execs = db_api.get_expired_executions(now) self.assertEqual(0, len(execs)) def test_expiration_policy_for_executions_with_max_executions_scen1(self): """scenario1 Tests the max_executions logic with max_finished_executions = 'total not expired and completed executions' - 1 """ _create_workflow_executions() _set_expiration_policy_config( evaluation_interval=1, older_than=30, mfe=1 ) expiration_policy.run_execution_expiration_policy(self, ctx) # Assert the two running executions # (running_not_expired, running_not_expired2), # the sub execution (expired_but_not_a_parent) and the one allowed # finished execution (success_not_expired) are there. execs = db_api.get_workflow_executions() self.assertEqual(4, len(execs)) self.assertListEqual( [ 'expired_but_not_a_parent', 'running_not_expired', 'running_not_expired2', 'success_not_expired' ], sorted([ex.id for ex in execs]) ) def test_expiration_policy_for_executions_with_max_executions_scen2(self): """scenario2 Tests the max_executions logic with: max_finished_executions > total completed executions """ _create_workflow_executions() _set_expiration_policy_config( evaluation_interval=1, older_than=30, mfe=100 ) expiration_policy.run_execution_expiration_policy(self, ctx) # Assert the two running executions # (running_not_expired, running_not_expired2), the sub execution # (expired_but_not_a_parent) and the all finished execution # (success_not_expired, 'cancelled_not_expired') are there. 
execs = db_api.get_workflow_executions() self.assertEqual(5, len(execs)) self.assertListEqual( [ 'cancelled_not_expired', 'expired_but_not_a_parent', 'running_not_expired', 'running_not_expired2', 'success_not_expired' ], sorted([ex.id for ex in execs]) ) def test_periodic_task_parameters(self): _set_expiration_policy_config( evaluation_interval=17, older_than=13 ) e_policy = expiration_policy.ExecutionExpirationPolicy(cfg.CONF) self.assertEqual( 17 * 60, e_policy._periodic_spacing['run_execution_expiration_policy'] ) def test_periodic_task_scheduling(self): def _assert_scheduling(expiration_policy_config, should_schedule): ExecutionExpirationPolicy._periodic_tasks = [] _set_expiration_policy_config(*expiration_policy_config) e_policy = expiration_policy.ExecutionExpirationPolicy(cfg.CONF) if should_schedule: self.assertTrue( e_policy._periodic_tasks, "Periodic task should have been created." ) else: self.assertFalse( e_policy._periodic_tasks, "Periodic task shouldn't have been created." 
) _assert_scheduling([1, 1, None, None], True) _assert_scheduling([1, None, 1, None], True) _assert_scheduling([1, 1, 1, None], True) _assert_scheduling([1, None, None, None], False) _assert_scheduling([None, 1, 1, None], False) _assert_scheduling([None, 1, 1, None], False) _assert_scheduling([1, 0, 0, 0], False) _assert_scheduling([0, 1, 1, 0], False) _assert_scheduling([0, 1, 1, 0], False) def tearDown(self): """Restores the size limit config to default.""" super(ExpirationPolicyTest, self).tearDown() cfg.CONF.set_default('auth_enable', False, group='pecan') ctx.set_ctx(None) _set_expiration_policy_config(None, None, None, None) def _set_expiration_policy_config(evaluation_interval, older_than, mfe=0, batch_size=0): cfg.CONF.set_default( 'evaluation_interval', evaluation_interval, group='execution_expiration_policy' ) cfg.CONF.set_default( 'older_than', older_than, group='execution_expiration_policy' ) cfg.CONF.set_default( 'max_finished_executions', mfe, group='execution_expiration_policy' ) cfg.CONF.set_default( 'batch_size', batch_size, group='execution_expiration_policy' ) mistral-6.0.0/mistral/tests/unit/expressions/0000775000175100017510000000000013245513604021375 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/unit/expressions/__init__.py0000666000175100017510000000000013245513262023476 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/unit/expressions/test_jinja_expression.py0000666000175100017510000004651113245513262026371 0ustar zuulzuul00000000000000# Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. import datetime import mock from mistral.db.v2.sqlalchemy import api as db_api from mistral import exceptions as exc from mistral.expressions import jinja_expression as expr from mistral.tests.unit import base from mistral import utils DATA = { "server": { "id": "03ea824a-aa24-4105-9131-66c48ae54acf", "name": "cloud-fedora", "status": "ACTIVE" }, "status": "OK" } SERVERS = { "servers": [ {'name': 'centos'}, {'name': 'ubuntu'}, {'name': 'fedora'} ] } WF_EXECS = [ { 'spec': {}, 'id': "one", 'start_params': {'task': 'my_task1'}, 'state': 'IDLE', 'state_info': "Running...", 'created_at': datetime.datetime(2016, 12, 1, 15, 0, 0), 'updated_at': None, 'context': None, 'task_id': None, 'trust_id': None, 'description': None, 'output': None }, { 'spec': {}, 'id': "two", 'root_execution_id': "one", 'start_params': {'task': 'my_task1'}, 'state': 'RUNNING', 'state_info': "Running...", 'created_at': datetime.datetime(2016, 12, 1, 15, 1, 0), 'updated_at': None, 'context': {'image_id': '123123'}, 'task_id': None, 'trust_id': None, 'description': None, 'output': None } ] class JinjaEvaluatorTest(base.BaseTest): def setUp(self): super(JinjaEvaluatorTest, self).setUp() self._evaluator = expr.JinjaEvaluator() def test_expression_result(self): res = self._evaluator.evaluate('_.server', DATA) self.assertEqual({ 'id': '03ea824a-aa24-4105-9131-66c48ae54acf', 'name': 'cloud-fedora', 'status': 'ACTIVE' }, res) res = self._evaluator.evaluate('_.server.id', DATA) self.assertEqual('03ea824a-aa24-4105-9131-66c48ae54acf', res) res = self._evaluator.evaluate("_.server.status == 'ACTIVE'", DATA) self.assertTrue(res) def test_wrong_expression(self): res = self._evaluator.evaluate("_.status == 'Invalid value'", DATA) self.assertFalse(res) # One thing to note about Jinja is that by default it would not raise # an exception on KeyError inside the expression, it will consider # value to be None. 
Same with NameError, it won't return an original # expression (which by itself seems confusing). Jinja allows us to # change behavior in both cases by switching to StrictUndefined, but # either one or the other will surely suffer. self.assertRaises( exc.JinjaEvaluationException, self._evaluator.evaluate, '_.wrong_key', DATA ) self.assertRaises( exc.JinjaEvaluationException, self._evaluator.evaluate, 'invalid_expression_string', DATA ) def test_select_result(self): res = self._evaluator.evaluate( '_.servers|selectattr("name", "equalto", "ubuntu")', SERVERS ) item = list(res)[0] self.assertEqual({'name': 'ubuntu'}, item) def test_function_string(self): self.assertEqual('3', self._evaluator.evaluate('_|string', '3')) self.assertEqual('3', self._evaluator.evaluate('_|string', 3)) def test_function_len(self): self.assertEqual(3, self._evaluator.evaluate('_|length', 'hey')) data = [{'some': 'thing'}] self.assertEqual( 1, self._evaluator.evaluate( '_|selectattr("some", "equalto", "thing")|list|length', data ) ) def test_validate(self): self._evaluator.validate('abc') self._evaluator.validate('1') self._evaluator.validate('1 + 2') self._evaluator.validate('_.a1') self._evaluator.validate('_.a1 * _.a2') def test_validate_failed(self): self.assertRaises(exc.JinjaGrammarException, self._evaluator.validate, '*') self.assertRaises(exc.JinjaEvaluationException, self._evaluator.validate, [1, 2, 3]) self.assertRaises(exc.JinjaEvaluationException, self._evaluator.validate, {'a': 1}) def test_function_json_pp(self): self.assertEqual('"3"', self._evaluator.evaluate('json_pp(_)', '3')) self.assertEqual('3', self._evaluator.evaluate('json_pp(_)', 3)) self.assertEqual( '[\n 1,\n 2\n]', self._evaluator.evaluate('json_pp(_)', [1, 2]) ) self.assertEqual( '{\n "a": "b"\n}', self._evaluator.evaluate('json_pp(_)', {'a': 'b'}) ) self.assertEqual( '"Mistral\nis\nawesome"', self._evaluator.evaluate( 'json_pp(_)', '\n'.join(['Mistral', 'is', 'awesome']) ) ) def test_filter_json_pp(self): 
self.assertEqual('"3"', self._evaluator.evaluate('_|json_pp', '3')) self.assertEqual('3', self._evaluator.evaluate('_|json_pp', 3)) self.assertEqual( '[\n 1,\n 2\n]', self._evaluator.evaluate('_|json_pp', [1, 2]) ) self.assertEqual( '{\n "a": "b"\n}', self._evaluator.evaluate('_|json_pp', {'a': 'b'}) ) self.assertEqual( '"Mistral\nis\nawesome"', self._evaluator.evaluate( '_|json_pp', '\n'.join(['Mistral', 'is', 'awesome']) ) ) def test_function_uuid(self): uuid = self._evaluator.evaluate('uuid()', {}) self.assertTrue(utils.is_valid_uuid(uuid)) def test_filter_uuid(self): uuid = self._evaluator.evaluate('_|uuid', '3') self.assertTrue(utils.is_valid_uuid(uuid)) def test_function_env(self): ctx = {'__env': 'some'} self.assertEqual(ctx['__env'], self._evaluator.evaluate('env()', ctx)) def test_filter_env(self): ctx = {'__env': 'some'} self.assertEqual(ctx['__env'], self._evaluator.evaluate('_|env', ctx)) @mock.patch('mistral.db.v2.api.get_task_executions') @mock.patch('mistral.workflow.data_flow.get_task_execution_result') def test_filter_task_without_task_execution(self, task_execution_result, task_executions): task = mock.MagicMock(return_value={}) task_executions.return_value = [task] ctx = { '__task_execution': None, '__execution': { 'id': 'some' } } result = self._evaluator.evaluate('_|task("some")', ctx) self.assertEqual({ 'id': task.id, 'name': task.name, 'published': task.published, 'result': task_execution_result(), 'spec': task.spec, 'state': task.state, 'state_info': task.state_info, 'type': task.type, 'workflow_execution_id': task.workflow_execution_id, 'created_at': task.created_at.isoformat(' '), 'updated_at': task.updated_at.isoformat(' ') }, result) @mock.patch('mistral.db.v2.api.get_task_executions') @mock.patch('mistral.workflow.data_flow.get_task_execution_result') def test_filter_tasks_without_task_execution(self, task_execution_result, task_executions): task = mock.MagicMock(return_value={}) task_executions.return_value = [task] ctx = { 
'__task_execution': None, '__execution': { 'id': 'some' } } result = self._evaluator.evaluate('_|tasks()', ctx) self.assertEqual([{ 'id': task.id, 'name': task.name, 'published': task.published, 'result': task_execution_result(), 'spec': task.spec, 'state': task.state, 'state_info': task.state_info, 'type': task.type, 'workflow_execution_id': task.workflow_execution_id, 'created_at': task.created_at.isoformat(' '), 'updated_at': task.updated_at.isoformat(' ') }], result) @mock.patch('mistral.db.v2.api.get_task_execution') @mock.patch('mistral.workflow.data_flow.get_task_execution_result') def test_filter_task_with_taskexecution(self, task_execution_result, task_execution): ctx = { '__task_execution': { 'id': 'some', 'name': 'some' } } result = self._evaluator.evaluate('_|task("some")', ctx) self.assertEqual({ 'id': task_execution().id, 'name': task_execution().name, 'published': task_execution().published, 'result': task_execution_result(), 'spec': task_execution().spec, 'state': task_execution().state, 'state_info': task_execution().state_info, 'type': task_execution().type, 'workflow_execution_id': task_execution().workflow_execution_id, 'created_at': task_execution().created_at.isoformat(' '), 'updated_at': task_execution().updated_at.isoformat(' ') }, result) @mock.patch('mistral.db.v2.api.get_task_execution') @mock.patch('mistral.workflow.data_flow.get_task_execution_result') def test_function_task(self, task_execution_result, task_execution): ctx = { '__task_execution': { 'id': 'some', 'name': 'some' } } result = self._evaluator.evaluate('task("some")', ctx) self.assertEqual({ 'id': task_execution().id, 'name': task_execution().name, 'published': task_execution().published, 'result': task_execution_result(), 'spec': task_execution().spec, 'state': task_execution().state, 'state_info': task_execution().state_info, 'type': task_execution().type, 'workflow_execution_id': task_execution().workflow_execution_id, 'created_at': 
task_execution().created_at.isoformat(' '), 'updated_at': task_execution().updated_at.isoformat(' ') }, result) @mock.patch('mistral.db.v2.api.get_workflow_execution') def test_filter_execution(self, workflow_execution): wf_ex = mock.MagicMock(return_value={}) workflow_execution.return_value = wf_ex ctx = { '__execution': { 'id': 'some' } } result = self._evaluator.evaluate('_|execution', ctx) self.assertEqual({ 'id': wf_ex.id, 'name': wf_ex.name, 'spec': wf_ex.spec, 'input': wf_ex.input, 'params': wf_ex.params, 'created_at': wf_ex.created_at.isoformat(' '), 'updated_at': wf_ex.updated_at.isoformat(' ') }, result) def test_executions(self): with db_api.transaction(read_only=True): created0 = db_api.create_workflow_execution(WF_EXECS[0]) created1 = db_api.create_workflow_execution(WF_EXECS[1]) ctx = { '__execution': { 'id': 'some' } } result = self._evaluator.evaluate('_|executions()', ctx) self.assertEqual([created0, created1], result) def test_executions_id_filter(self): with db_api.transaction(read_only=True): created0 = db_api.create_workflow_execution(WF_EXECS[0]) created1 = db_api.create_workflow_execution(WF_EXECS[1]) ctx = { '__execution': { 'id': 'some' } } result = self._evaluator.evaluate('_|executions("one")', ctx) self.assertEqual([created0], result) result = self._evaluator.evaluate( 'executions(root_execution_id="one") ', ctx ) self.assertEqual([created1], result) def test_executions_state_filter(self): with db_api.transaction(read_only=True): db_api.create_workflow_execution(WF_EXECS[0]) created1 = db_api.create_workflow_execution(WF_EXECS[1]) ctx = { '__execution': { 'id': 'some' } } result = self._evaluator.evaluate( '_|executions(state="RUNNING")', ctx ) self.assertEqual([created1], result) result = self._evaluator.evaluate( '_|executions(id="one", state="RUNNING")', ctx ) self.assertEqual([], result) def test_executions_from_time_filter(self): with db_api.transaction(read_only=True): created0 = db_api.create_workflow_execution(WF_EXECS[0]) 
created1 = db_api.create_workflow_execution(WF_EXECS[1]) ctx = { '__execution': { 'id': 'some' } } result = self._evaluator.evaluate( '_|executions(from_time="2000-01-01")', ctx ) self.assertEqual([created0, created1], result) result = self._evaluator.evaluate( '_|executions(from_time="2016-12-01 15:01:00")', ctx ) self.assertEqual([created1], result) result = self._evaluator.evaluate( '_|executions(id="one", from_time="2016-12-01 15:01:00")', ctx ) self.assertEqual([], result) def test_executions_to_time_filter(self): with db_api.transaction(read_only=True): created0 = db_api.create_workflow_execution(WF_EXECS[0]) created1 = db_api.create_workflow_execution(WF_EXECS[1]) ctx = { '__execution': { 'id': 'some' } } result = self._evaluator.evaluate( '_|executions(to_time="2020-01-01")', ctx ) self.assertEqual([created0, created1], result) result = self._evaluator.evaluate( '_|executions(to_time="2016-12-01 15:01:00")', ctx ) self.assertEqual([created0], result) result = self._evaluator.evaluate( '_|executions(id="two", to_time="2016-12-01 15:01:00")', ctx ) self.assertEqual([], result) @mock.patch('mistral.db.v2.api.get_workflow_execution') def test_function_execution(self, workflow_execution): wf_ex = mock.MagicMock(return_value={}) workflow_execution.return_value = wf_ex ctx = { '__execution': { 'id': 'some' } } result = self._evaluator.evaluate('execution()', ctx) self.assertEqual({ 'id': wf_ex.id, 'name': wf_ex.name, 'spec': wf_ex.spec, 'input': wf_ex.input, 'params': wf_ex.params, 'created_at': wf_ex.created_at.isoformat(' '), 'updated_at': wf_ex.updated_at.isoformat(' ') }, result) class InlineJinjaEvaluatorTest(base.BaseTest): def setUp(self): super(InlineJinjaEvaluatorTest, self).setUp() self._evaluator = expr.InlineJinjaEvaluator() def test_multiple_placeholders(self): expr_str = """ Statistics for tenant "{{ _.project_id }}" Number of virtual machines: {{ _.vm_count }} Number of active virtual machines: {{ _.active_vm_count }} Number of networks: {{ 
_.net_count }} -- Sincerely, Mistral Team. """ result = self._evaluator.evaluate( expr_str, { 'project_id': '1-2-3-4', 'vm_count': 28, 'active_vm_count': 0, 'net_count': 1 } ) expected_result = """ Statistics for tenant "1-2-3-4" Number of virtual machines: 28 Number of active virtual machines: 0 Number of networks: 1 -- Sincerely, Mistral Team. """ self.assertEqual(expected_result, result) def test_block_placeholders(self): expr_str = """ Statistics for tenant "{{ _.project_id }}" Number of virtual machines: {{ _.vm_count }} {% if _.active_vm_count %} Number of active virtual machines: {{ _.active_vm_count }} {% endif %} Number of networks: {{ _.net_count }} -- Sincerely, Mistral Team. """ result = self._evaluator.evaluate( expr_str, { 'project_id': '1-2-3-4', 'vm_count': 28, 'active_vm_count': 0, 'net_count': 1 } ) expected_result = """ Statistics for tenant "1-2-3-4" Number of virtual machines: 28 Number of networks: 1 -- Sincerely, Mistral Team. """ self.assertEqual(expected_result, result) def test_single_value_casting(self): self.assertEqual(3, self._evaluator.evaluate('{{ _ }}', 3)) self.assertEqual('33', self._evaluator.evaluate('{{ _ }}{{ _ }}', 3)) def test_multiple_expressions(self): context = {'dir': '/tmp', 'file': 'a.txt'} expected_result = '/tmp/a.txt' result = self._evaluator.evaluate('{{ _.dir }}/{{ _.file }}', context) self.assertEqual(expected_result, result) def test_function_string(self): self.assertEqual('3', self._evaluator.evaluate('{{ _|string }}', '3')) self.assertEqual('3', self._evaluator.evaluate('{{ _|string }}', 3)) def test_validate(self): self._evaluator.validate('There is no expression.') self._evaluator.validate('{{ abc }}') self._evaluator.validate('{{ 1 }}') self._evaluator.validate('{{ 1 + 2 }}') self._evaluator.validate('{{ _.a1 }}') self._evaluator.validate('{{ _.a1 * _.a2 }}') self._evaluator.validate('{{ _.a1 }} is {{ _.a2 }}') self._evaluator.validate('The value is {{ _.a1 }}.') def test_validate_failed(self): 
self.assertRaises(exc.JinjaGrammarException, self._evaluator.validate, 'The value is {{ * }}.') self.assertRaises(exc.JinjaEvaluationException, self._evaluator.validate, [1, 2, 3]) self.assertRaises(exc.JinjaEvaluationException, self._evaluator.validate, {'a': 1}) mistral-6.0.0/mistral/tests/unit/expressions/test_yaql_expression.py0000666000175100017510000002357113245513262026245 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # Copyright 2015 - StackStorm, Inc. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
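The inline-evaluator tests above rely on two behaviors: each `{{ ... }}` placeholder inside a larger string is substituted as text, while a string that is exactly one placeholder keeps the evaluated value's original type (single-value casting, so `'{{ _ }}'` with `3` yields the integer `3` but `'{{ _ }}{{ _ }}'` yields `'33'`). A stdlib-only sketch of that pattern, using `eval()` as a stand-in for the real Jinja evaluator (hypothetical helper, not Mistral's `InlineJinjaEvaluator`):

```python
import re

# Non-greedy match of everything between {{ and }}.
PLACEHOLDER = re.compile(r'\{\{\s*(.*?)\s*\}\}')


def evaluate_inline(expr_str, ctx):
    """Substitute each {{ ... }} placeholder against a `_` variable.

    If the whole string is a single placeholder, return the evaluated
    value unchanged (single-value casting); otherwise join everything
    back into one string.
    """
    matches = list(PLACEHOLDER.finditer(expr_str))

    if len(matches) == 1 and matches[0].span() == (0, len(expr_str)):
        return eval(matches[0].group(1), {'_': ctx})

    return PLACEHOLDER.sub(
        lambda m: str(eval(m.group(1), {'_': ctx})),
        expr_str
    )
```

The span check is what distinguishes the two cases: only when the single match covers the entire input is the raw evaluated object returned instead of its string form.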
import datetime import json import sys import warnings import mock from mistral import exceptions as exc from mistral.expressions import yaql_expression as expr from mistral.tests.unit import base from mistral import utils DATA = { "server": { "id": "03ea824a-aa24-4105-9131-66c48ae54acf", "name": "cloud-fedora", "status": "ACTIVE" }, "status": "OK" } SERVERS = { "servers": [ {'name': 'centos'}, {'name': 'ubuntu'}, {'name': 'fedora'} ] } class YaqlEvaluatorTest(base.BaseTest): def setUp(self): super(YaqlEvaluatorTest, self).setUp() self._evaluator = expr.YAQLEvaluator() def test_expression_result(self): res = self._evaluator.evaluate('$.server', DATA) self.assertEqual({ 'id': "03ea824a-aa24-4105-9131-66c48ae54acf", 'name': 'cloud-fedora', 'status': 'ACTIVE' }, res) res = self._evaluator.evaluate('$.server.id', DATA) self.assertEqual('03ea824a-aa24-4105-9131-66c48ae54acf', res) res = self._evaluator.evaluate("$.server.status = 'ACTIVE'", DATA) self.assertTrue(res) def test_wrong_expression(self): res = self._evaluator.evaluate("$.status = 'Invalid value'", DATA) self.assertFalse(res) self.assertRaises( exc.YaqlEvaluationException, self._evaluator.evaluate, '$.wrong_key', DATA ) expression_str = 'invalid_expression_string' res = self._evaluator.evaluate(expression_str, DATA) self.assertEqual(expression_str, res) def test_select_result(self): res = self._evaluator.evaluate( '$.servers.where($.name = ubuntu)', SERVERS ) item = list(res)[0] self.assertEqual({'name': 'ubuntu'}, item) def test_function_string(self): self.assertEqual('3', self._evaluator.evaluate('str($)', '3')) self.assertEqual('3', self._evaluator.evaluate('str($)', 3)) def test_function_len(self): self.assertEqual(3, self._evaluator.evaluate('len($)', 'hey')) data = [{'some': 'thing'}] self.assertEqual( 1, self._evaluator.evaluate('$.where($.some = thing).len()', data) ) def test_validate(self): self._evaluator.validate('abc') self._evaluator.validate('1') self._evaluator.validate('1 + 2') 
self._evaluator.validate('$.a1') self._evaluator.validate('$.a1 * $.a2') def test_validate_failed(self): self.assertRaises(exc.YaqlGrammarException, self._evaluator.validate, '*') self.assertRaises(exc.YaqlGrammarException, self._evaluator.validate, [1, 2, 3]) self.assertRaises(exc.YaqlGrammarException, self._evaluator.validate, {'a': 1}) def test_function_json_pp(self): self.assertEqual('"3"', self._evaluator.evaluate('json_pp($)', '3')) self.assertEqual('3', self._evaluator.evaluate('json_pp($)', 3)) self.assertEqual( '[\n 1,\n 2\n]', self._evaluator.evaluate('json_pp($)', [1, 2]) ) self.assertEqual( '{\n "a": "b"\n}', self._evaluator.evaluate('json_pp($)', {'a': 'b'}) ) self.assertEqual( '"Mistral\nis\nawesome"', self._evaluator.evaluate( 'json_pp($)', '\n'.join(['Mistral', 'is', 'awesome']) ) ) def test_function_json_pp_deprecation(self): with warnings.catch_warnings(record=True) as w: # ensure warnings aren't suppressed from other tests for name, mod in list(sys.modules.items()): getattr(mod, '__warningregistry__', dict()).clear() warnings.simplefilter('always') result = self._evaluator.evaluate('json_pp($)', '3') self.assertEqual('"3"', result) self.assertEqual(len(w), 1) self.assertTrue(issubclass(w[-1].category, DeprecationWarning)) self.assertTrue(str(w[-1].message).startswith( "json_pp was deprecated in Queens and will be removed in the S " )) def test_function_json_dump(self): self.assertEqual('"3"', self._evaluator.evaluate('json_dump($)', '3')) self.assertEqual('3', self._evaluator.evaluate('json_dump($)', 3)) self.assertEqual( json.dumps([1, 2], indent=4), self._evaluator.evaluate('json_dump($)', [1, 2]) ) self.assertEqual( json.dumps({"a": "b"}, indent=4), self._evaluator.evaluate('json_dump($)', {'a': 'b'}) ) self.assertEqual( json.dumps('\n'.join(["Mistral", "is", "awesome"]), indent=4), self._evaluator.evaluate( 'json_dump($)', '\n'.join(['Mistral', 'is', 'awesome']) ) ) def test_function_uuid(self): uuid = self._evaluator.evaluate('uuid()', {}) 
self.assertTrue(utils.is_valid_uuid(uuid)) @mock.patch('mistral.db.v2.api.get_task_executions') @mock.patch('mistral.workflow.data_flow.get_task_execution_result') def test_filter_tasks_without_task_execution(self, task_execution_result, task_executions): task_execution_result.return_value = 'task_execution_result' time_now = utils.utc_now_sec() task = type("obj", (object,), { 'id': 'id', 'name': 'name', 'published': 'published', 'result': task_execution_result(), 'spec': 'spec', 'state': 'state', 'state_info': 'state_info', 'type': 'type', 'workflow_execution_id': 'workflow_execution_id', 'created_at': time_now, 'updated_at': time_now + datetime.timedelta(seconds=1), })() task_executions.return_value = [task] ctx = { '__task_execution': None, '__execution': { 'id': 'some' } } result = self._evaluator.evaluate('tasks(some)', ctx) self.assertEqual(1, len(result)) self.assertDictEqual({ 'id': task.id, 'name': task.name, 'published': task.published, 'result': task.result, 'spec': task.spec, 'state': task.state, 'state_info': task.state_info, 'type': task.type, 'workflow_execution_id': task.workflow_execution_id, 'created_at': task.created_at.isoformat(' '), 'updated_at': task.updated_at.isoformat(' ') }, result[0]) def test_function_env(self): ctx = {'__env': 'some'} self.assertEqual(ctx['__env'], self._evaluator.evaluate('env()', ctx)) class InlineYAQLEvaluatorTest(base.BaseTest): def setUp(self): super(InlineYAQLEvaluatorTest, self).setUp() self._evaluator = expr.InlineYAQLEvaluator() def test_multiple_placeholders(self): expr_str = """ Statistics for tenant "<% $.project_id %>" Number of virtual machines: <% $.vm_count %> Number of active virtual machines: <% $.active_vm_count %> Number of networks: <% $.net_count %> -- Sincerely, Mistral Team. 
""" result = self._evaluator.evaluate( expr_str, { 'project_id': '1-2-3-4', 'vm_count': 28, 'active_vm_count': 0, 'net_count': 1 } ) expected_result = """ Statistics for tenant "1-2-3-4" Number of virtual machines: 28 Number of active virtual machines: 0 Number of networks: 1 -- Sincerely, Mistral Team. """ self.assertEqual(expected_result, result) def test_single_value_casting(self): self.assertEqual(3, self._evaluator.evaluate('<% $ %>', 3)) self.assertEqual('33', self._evaluator.evaluate('<% $ %><% $ %>', 3)) def test_function_string(self): self.assertEqual('3', self._evaluator.evaluate('<% str($) %>', '3')) self.assertEqual('3', self._evaluator.evaluate('<% str($) %>', 3)) def test_validate(self): self._evaluator.validate('There is no expression.') self._evaluator.validate('<% abc %>') self._evaluator.validate('<% 1 %>') self._evaluator.validate('<% 1 + 2 %>') self._evaluator.validate('<% $.a1 %>') self._evaluator.validate('<% $.a1 * $.a2 %>') self._evaluator.validate('<% $.a1 %> is <% $.a2 %>') self._evaluator.validate('The value is <% $.a1 %>.') def test_validate_failed(self): self.assertRaises(exc.YaqlGrammarException, self._evaluator.validate, 'The value is <% * %>.') self.assertRaises(exc.YaqlEvaluationException, self._evaluator.validate, [1, 2, 3]) self.assertRaises(exc.YaqlEvaluationException, self._evaluator.validate, {'a': 1}) mistral-6.0.0/mistral/tests/unit/executors/0000775000175100017510000000000013245513604021034 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/unit/executors/test_plugins.py0000666000175100017510000000247613245513272024142 0ustar zuulzuul00000000000000# Copyright 2017 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_log import log as logging

from mistral.executors import base as exe
from mistral.executors import default_executor as d_exe
from mistral.executors import remote_executor as r_exe
from mistral.tests.unit.executors import base


LOG = logging.getLogger(__name__)


class PluginTestCase(base.ExecutorTestCase):

    def tearDown(self):
        exe.cleanup()

        super(PluginTestCase, self).tearDown()

    def test_get_local_executor(self):
        executor = exe.get_executor('local')

        self.assertIsInstance(executor, d_exe.DefaultExecutor)

    def test_get_remote_executor(self):
        executor = exe.get_executor('remote')

        self.assertIsInstance(executor, r_exe.RemoteExecutor)
mistral-6.0.0/mistral/tests/unit/executors/base.py0000666000175100017510000000150213245513262022320 0ustar zuulzuul00000000000000# Copyright 2017 - Brocade Communications Systems, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log as logging

from mistral.tests.unit.engine import base as engine_test_base


LOG = logging.getLogger(__name__)


class ExecutorTestCase(engine_test_base.EngineTestCase):
    pass
mistral-6.0.0/mistral/tests/unit/executors/__init__.py0000666000175100017510000000000013245513262023135 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/unit/executors/test_local_executor.py0000666000175100017510000001241313245513262025460 0ustar zuulzuul00000000000000# Copyright 2017 - Brocade Communications Systems, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import mock
from oslo_config import cfg
from oslo_log import log as logging

from mistral.actions import std_actions
from mistral.db.v2 import api as db_api
from mistral.executors import base as exe
from mistral.executors import remote_executor as r_exe
from mistral.services import workbooks as wb_svc
from mistral.tests.unit.executors import base
from mistral.workflow import states


LOG = logging.getLogger(__name__)

# Use the set_default method to set the value; otherwise, in certain test
# cases the change in value is not permanent.
cfg.CONF.set_default('auth_enable', False, group='pecan') @mock.patch.object( r_exe.RemoteExecutor, 'run_action', mock.MagicMock(return_value=None) ) class LocalExecutorTestCase(base.ExecutorTestCase): @classmethod def setUpClass(cls): super(LocalExecutorTestCase, cls).setUpClass() cfg.CONF.set_default('type', 'local', group='executor') @classmethod def tearDownClass(cls): exe.cleanup() cfg.CONF.set_default('type', 'remote', group='executor') super(LocalExecutorTestCase, cls).tearDownClass() @mock.patch.object( std_actions.EchoAction, 'run', mock.MagicMock( side_effect=[ 'Task 1', # Mock task1 success. 'Task 2', # Mock task2 success. 'Task 3' # Mock task3 success. ] ) ) def test_run(self): wb_def = """ version: '2.0' name: wb1 workflows: wf1: type: direct tasks: t1: action: std.echo output="Task 1" on-success: - t2 t2: action: std.echo output="Task 2" on-success: - t3 t3: action: std.echo output="Task 3" """ wb_svc.create_workbook_v2(wb_def) wf_ex = self.engine.start_workflow('wb1.wf1') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions self.assertEqual(states.SUCCESS, wf_ex.state) self.assertIsNone(wf_ex.state_info) self.assertEqual(3, len(task_execs)) task_1_ex = self._assert_single_item(task_execs, name='t1') task_2_ex = self._assert_single_item(task_execs, name='t2') task_3_ex = self._assert_single_item(task_execs, name='t3') self.assertEqual(states.SUCCESS, task_1_ex.state) self.assertEqual(states.SUCCESS, task_2_ex.state) self.assertEqual(states.SUCCESS, task_3_ex.state) # Make sure the remote executor is not called. self.assertFalse(r_exe.RemoteExecutor.run_action.called) @mock.patch.object( std_actions.EchoAction, 'run', mock.MagicMock( side_effect=[ 'Task 1.0', # Mock task1 success. 'Task 1.1', # Mock task1 success. 'Task 1.2', # Mock task1 success. 'Task 2' # Mock task2 success. 
] ) ) def test_run_with_items(self): wb_def = """ version: '2.0' name: wb1 workflows: wf1: type: direct tasks: t1: with-items: i in <% list(range(0, 3)) %> action: std.echo output="Task 1.<% $.i %>" publish: v1: <% task(t1).result %> on-success: - t2 t2: action: std.echo output="Task 2" """ wb_svc.create_workbook_v2(wb_def) wf_ex = self.engine.start_workflow('wb1.wf1') self.await_workflow_success(wf_ex.id) with db_api.transaction(): wf_ex = db_api.get_workflow_execution(wf_ex.id) task_execs = wf_ex.task_executions self.assertEqual(states.SUCCESS, wf_ex.state) self.assertEqual(2, len(wf_ex.task_executions)) task_1_ex = self._assert_single_item(task_execs, name='t1') task_2_ex = self._assert_single_item(task_execs, name='t2') self.assertEqual(states.SUCCESS, task_1_ex.state) self.assertEqual(states.SUCCESS, task_2_ex.state) with db_api.transaction(): task_1_action_exs = db_api.get_action_executions( task_execution_id=task_1_ex.id ) self.assertEqual(3, len(task_1_action_exs)) # Make sure the remote executor is not called. self.assertFalse(r_exe.RemoteExecutor.run_action.called) mistral-6.0.0/mistral/tests/unit/test_launcher.py0000666000175100017510000000531313245513272022232 0ustar zuulzuul00000000000000# Copyright 2017 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
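The launcher tests in `test_launcher.py` (next in this archive) and the base test class's `self._await(...)` both rely on the same idea: poll a condition in a loop, sleeping briefly between attempts, until it holds or the attempt budget runs out. A generic sketch of that helper — the name `await_condition` and its defaults are mine, chosen to mirror the `for i in range(0, 50): ... eventlet.sleep(0.1)` loops in the tests, not an actual mistral utility:

```python
import time


def await_condition(predicate, attempts=50, delay=0.1):
    """Poll predicate() until it returns truthy or attempts run out.

    Returns True as soon as the predicate holds, False if it never
    does within the given number of attempts.
    """
    for _ in range(attempts):
        if predicate():
            return True
        time.sleep(delay)
    return False
```

In an eventlet-based service (as in these tests), `eventlet.sleep` would be used instead of `time.sleep` so the poll yields to other green threads while waiting.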
import eventlet from mistral.api import service as api_service from mistral.cmd import launch from mistral.tests.unit import base class ServiceLauncherTest(base.DbTestCase): def setUp(self): super(ServiceLauncherTest, self).setUp() self.override_config('enabled', False, group='cron_trigger') launch.reset_server_managers() def test_launch_all(self): eventlet.spawn(launch.launch_any, launch.LAUNCH_OPTIONS.keys()) for i in range(0, 50): svr_proc_mgr = launch.get_server_process_manager() svr_thrd_mgr = launch.get_server_thread_manager() if svr_proc_mgr and svr_thrd_mgr: break eventlet.sleep(0.1) self.assertIsNotNone(svr_proc_mgr) self.assertIsNotNone(svr_thrd_mgr) api_server = api_service.WSGIService('mistral_api') api_workers = api_server.workers self._await(lambda: len(svr_proc_mgr.children.keys()) == api_workers) self._await(lambda: len(svr_thrd_mgr.services.services) == 3) def test_launch_process(self): eventlet.spawn(launch.launch_any, ['api']) for i in range(0, 50): svr_proc_mgr = launch.get_server_process_manager() if svr_proc_mgr: break eventlet.sleep(0.1) svr_thrd_mgr = launch.get_server_thread_manager() self.assertIsNotNone(svr_proc_mgr) self.assertIsNone(svr_thrd_mgr) api_server = api_service.WSGIService('mistral_api') api_workers = api_server.workers self._await(lambda: len(svr_proc_mgr.children.keys()) == api_workers) def test_launch_thread(self): eventlet.spawn(launch.launch_any, ['engine']) for i in range(0, 50): svr_thrd_mgr = launch.get_server_thread_manager() if svr_thrd_mgr: break eventlet.sleep(0.1) svr_proc_mgr = launch.get_server_process_manager() self.assertIsNone(svr_proc_mgr) self.assertIsNotNone(svr_thrd_mgr) self._await(lambda: len(svr_thrd_mgr.services.services) == 1) mistral-6.0.0/mistral/tests/unit/test_coordination.py0000666000175100017510000001046413245513262023123 0ustar zuulzuul00000000000000# Copyright 2015 Huawei Technologies Co., Ltd. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import mock from oslo_config import cfg import six from mistral.service import coordination from mistral.tests.unit import base class ServiceCoordinatorTest(base.BaseTest): def test_start(self): cfg.CONF.set_default( 'backend_url', 'zake://', 'coordination' ) coordinator = coordination.ServiceCoordinator('fake_id') coordinator.start() self.assertTrue(coordinator.is_active()) def test_start_without_backend(self): cfg.CONF.set_default('backend_url', None, 'coordination') coordinator = coordination.ServiceCoordinator() coordinator.start() self.assertFalse(coordinator.is_active()) def test_stop_not_active(self): cfg.CONF.set_default('backend_url', None, 'coordination') coordinator = coordination.ServiceCoordinator() coordinator.start() coordinator.stop() self.assertFalse(coordinator.is_active()) def test_stop(self): cfg.CONF.set_default( 'backend_url', 'zake://', 'coordination' ) coordinator = coordination.ServiceCoordinator() coordinator.start() coordinator.stop() self.assertFalse(coordinator.is_active()) def test_join_group_not_active(self): cfg.CONF.set_default('backend_url', None, 'coordination') coordinator = coordination.ServiceCoordinator() coordinator.start() coordinator.join_group('fake_group') members = coordinator.get_members('fake_group') self.assertFalse(coordinator.is_active()) self.assertEqual(0, len(members)) def test_join_group_and_get_members(self): cfg.CONF.set_default( 'backend_url', 'zake://', 'coordination' ) 
coordinator = coordination.ServiceCoordinator(my_id='fake_id') coordinator.start() coordinator.join_group('fake_group') members = coordinator.get_members('fake_group') self.assertEqual(1, len(members)) self.assertItemsEqual((six.b('fake_id'),), members) def test_join_group_and_leave_group(self): cfg.CONF.set_default( 'backend_url', 'zake://', 'coordination' ) coordinator = coordination.ServiceCoordinator(my_id='fake_id') coordinator.start() coordinator.join_group('fake_group') members_before = coordinator.get_members('fake_group') coordinator.leave_group('fake_group') members_after = coordinator.get_members('fake_group') self.assertEqual(1, len(members_before)) self.assertEqual(set([six.b('fake_id')]), members_before) self.assertEqual(0, len(members_after)) self.assertEqual(set([]), members_after) class ServiceTest(base.BaseTest): def setUp(self): super(ServiceTest, self).setUp() # Re-initialize the global service coordinator object, in order to use # new coordination configuration. coordination.cleanup_service_coordinator() @mock.patch('mistral.utils.get_process_identifier', return_value='fake_id') def test_register_membership(self, mock_get_identifier): cfg.CONF.set_default('backend_url', 'zake://', 'coordination') srv = coordination.Service('fake_group') srv.register_membership() self.addCleanup(srv.stop) srv_coordinator = coordination.get_service_coordinator() self.assertIsNotNone(srv_coordinator) self.assertTrue(srv_coordinator.is_active()) members = srv_coordinator.get_members('fake_group') mock_get_identifier.assert_called_once_with() self.assertEqual(set([six.b('fake_id')]), members) mistral-6.0.0/mistral/tests/__init__.py0000666000175100017510000000000013245513261020134 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/resources/0000775000175100017510000000000013245513604020046 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/resources/wb_with_nested_wf.yaml0000666000175100017510000000040513245513261024433 0ustar zuulzuul00000000000000--- 
version: "2.0" name: wb_with_nested_wf workflows: wrapping_wf: type: direct tasks: call_inner_wf: workflow: inner_wf inner_wf: type: direct tasks: hello: action: std.echo output="Hello from inner workflow"mistral-6.0.0/mistral/tests/resources/workbook/0000775000175100017510000000000013245513604021703 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/resources/workbook/v2/0000775000175100017510000000000013245513604022232 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/resources/workbook/v2/my_workbook.yaml0000666000175100017510000000465713245513261025475 0ustar zuulzuul00000000000000version: '2.0' name: my_workbook description: This is a test workbook tags: [test, v2] actions: action1: description: This is a test ad-hoc action tags: [test, v2] base: std.echo base-input: output: Hello <% $.name %>! output: <% $ %> action2: description: This is a test ad-hoc action with base params tags: [test, v2] base: std.echo output="Echo output" output: <% $ %> workflows: wf1: description: This is a test workflow tags: [test, v2] type: reverse input: - name tasks: task1: description: This is a test task action: action1 name=<% $.name %> wait-before: 2 wait-after: 5 retry: count: 10 delay: 30 break-on: <% $.my_val = 10 %> concurrency: 3 task2: requires: [task1] action: std.echo output="Thanks <% $.name %>!" 
wf2: tags: [test, v2] type: direct task-defaults: retry: count: 10 delay: 30 break-on: <% $.my_val = 10 %> on-error: - fail: <% $.my_val = 0 %> on-success: - pause on-complete: - succeed tasks: task3: workflow: wf1 name="John Doe" age=32 param1=null param2=false on-error: - task4: <% $.my_val = 1 %> on-success: - task5: <% $.my_val = 2 %> on-complete: - task6: <% $.my_val = 3 %> task4: action: std.echo output="Task 4 echo" task5: action: std.echo output="Task 5 echo" task6: action: std.echo output="Task 6 echo" task7: with-items: vm_info in <% $.vms %> workflow: wf2 is_true=true object_list=[1, null, "str"] is_string="50" on-complete: - task9 - task10 task8: with-items: - itemX in <% $.arrayI %> - itemY in <% $.arrayJ %> workflow: wf2 expr_list=["<% $.v %>", "<% $.k %>"] expr=<% $.value %> target: nova on-complete: - task9 - task10 - task11 task9: join: all action: std.echo output="Task 9 echo" task10: join: 2 action: std.echo output="Task 10 echo" task11: join: one action: std.echo output="Task 11 echo" task12: action: std.http url="http://site.com?q=<% $.query %>" params="" task13: description: No-op task mistral-6.0.0/mistral/tests/resources/workbook/v2/workbook_schema_test.yaml0000666000175100017510000000211213245513261027327 0ustar zuulzuul00000000000000version: '2.0' name: workbook_schema_test description: > This is a test workbook to verify workbook the schema validation. Specifically we want to test the validation of workflow names. See bug #1645354 for more details. 
actions: actionversion: base: std.noop versionaction: base: std.noop actionversionaction: base: std.noop action-action: base: std.noop workflows: workflowversion: description: Workflow name ending with version tasks: task1: action: actionversion versionworkflow: description: Workflow name starting with version tasks: task1: action: versionaction workflowversionworkflow: description: Workflow name with version in the middle tasks: task1: action: actionversionaction version_workflow: description: Workflow name starting with version and an underscore tasks: task1: workflow: workflowversion workflow-with-hyphen: description: Workflow name containing - tasks: task1: action: action-action mistral-6.0.0/mistral/tests/resources/openstack/0000775000175100017510000000000013245513604022035 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/resources/openstack/action_collection_wb.yaml0000666000175100017510000000167513245513261027113 0ustar zuulzuul00000000000000--- version: '2.0' name: action_collection workflows: keystone: type: direct tasks: projects_list: action: keystone.projects_list publish: result: <% task().result %> nova: type: direct tasks: flavors_list: action: nova.flavors_list publish: result: <% task().result %> glance: type: direct tasks: images_list: action: glance.images_list publish: result: <% task().result %> heat: type: direct tasks: stacks_list: action: heat.stacks_list publish: result: <% task().result %> neutron: type: direct tasks: list_subnets: action: neutron.list_subnets publish: result: <% task().result %> cinder: type: direct tasks: volumes_list: action: cinder.volumes_list publish: result: <% task().result %> mistral-6.0.0/mistral/tests/resources/openstack/test_mapping.json0000666000175100017510000000101713245513261025422 0ustar zuulzuul00000000000000{ "_comment": "Mapping OpenStack action namespaces to all its actions. 
Each action name is mapped to python-client method name in this namespace.", "nova": { "servers_get": "servers.get", "servers_find": "servers.find", "volumes_delete_server_volume": "volumes.delete_server_volume" }, "keystone": { "users_list": "users.list", "trusts_create": "trusts.create" }, "glance": { "images_list": "images.list", "images_delete": "images.delete" } } mistral-6.0.0/mistral/tests/resources/wb_v1.yaml0000666000175100017510000000027113245513261021751 0ustar zuulzuul00000000000000Namespaces: Greetings: actions: hello: class: std.echo base-parameters: output: Hello! Workflow: tasks: hello: action: Greetings.hellomistral-6.0.0/mistral/tests/resources/wf_task_ex_concurrency.yaml0000666000175100017510000000025013245513261025474 0ustar zuulzuul00000000000000--- version: '2.0' test_task_ex_concurrency: tasks: task1: action: std.async_noop timeout: 2 task2: action: std.async_noop timeout: 2mistral-6.0.0/mistral/tests/resources/wb_v2.yaml0000666000175100017510000000030113245513261021744 0ustar zuulzuul00000000000000--- version: '2.0' name: test workflows: test: type: direct tasks: hello: action: std.echo output="Hello" publish: result: <% task(hello).result %> mistral-6.0.0/mistral/tests/resources/action_jinja.yaml0000666000175100017510000000045213245513261023364 0ustar zuulzuul00000000000000--- version: "2.0" greeting: description: "This action says 'Hello'" tags: [hello] base: std.echo base-input: output: 'Hello, {{ _.name }}' input: - name output: string: '{{ _ }}' farewell: base: std.echo base-input: output: 'Bye!' 
output: info: '{{ _ }}' mistral-6.0.0/mistral/tests/resources/for_wf_namespace/0000775000175100017510000000000013245513604023344 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/tests/resources/for_wf_namespace/middle_wf.yaml0000666000175100017510000000016213245513261026162 0ustar zuulzuul00000000000000--- version: '2.0' middle_wf: tasks: run_workflow_with_name_lowest_level_wf: workflow: lowest_level_wfmistral-6.0.0/mistral/tests/resources/for_wf_namespace/lowest_level_wf.yaml0000666000175100017510000000012213245513261027424 0ustar zuulzuul00000000000000--- version: '2.0' lowest_level_wf: tasks: noop_task: action: std.noopmistral-6.0.0/mistral/tests/resources/for_wf_namespace/top_level_wf.yaml0000666000175100017510000000015113245513261026713 0ustar zuulzuul00000000000000--- version: '2.0' top_level_wf: tasks: run_workflow_with_name_middle_wf: workflow: middle_wfmistral-6.0.0/mistral/tests/resources/action_v2.yaml0000666000175100017510000000044613245513261022623 0ustar zuulzuul00000000000000--- version: "2.0" greeting: description: "This action says 'Hello'" tags: [hello] base: std.echo base-input: output: 'Hello, <% $.name %>' input: - name output: string: <% $ %> farewell: base: std.echo base-input: output: 'Bye!' 
output: info: <% $ %> mistral-6.0.0/mistral/tests/resources/single_wf.yaml0000666000175100017510000000024113245513261022705 0ustar zuulzuul00000000000000--- version: '2.0' single_wf: type: direct tasks: hello: action: std.echo output="Hello" publish: result: <% task(hello).result %> mistral-6.0.0/mistral/tests/resources/wf_jinja.yaml0000666000175100017510000000102613245513261022521 0ustar zuulzuul00000000000000--- version: '2.0' wf: type: direct tasks: hello: action: std.echo output="Hello" wait-before: 1 publish: result: '{{ task("hello").result }}' wf1: type: reverse input: - farewell tasks: addressee: action: std.echo output="John" publish: name: '{{ task("addressee").result }}' goodbye: action: std.echo output="{{ _.farewell }}, {{ _.name }}" requires: [addressee] wf2: type: direct tasks: hello: action: std.echo output="Hello" mistral-6.0.0/mistral/tests/resources/wf_action_ex_concurrency.yaml0000666000175100017510000000024513245513261026013 0ustar zuulzuul00000000000000--- version: '2.0' test_action_ex_concurrency: tasks: test_with_items: with-items: index in <% range(2) %> action: std.echo output='<% $.index %>'mistral-6.0.0/mistral/tests/resources/wf_v2.yaml0000666000175100017510000000101613245513261021754 0ustar zuulzuul00000000000000--- version: '2.0' wf: type: direct tasks: hello: action: std.echo output="Hello" wait-before: 1 publish: result: <% task(hello).result %> wf1: type: reverse input: - farewell tasks: addressee: action: std.echo output="John" publish: name: <% task(addressee).result %> goodbye: action: std.echo output="<% $.farewell %>, <% $.name %>" requires: [addressee] wf2: type: direct tasks: hello: action: std.echo output="Hello" mistral-6.0.0/mistral/executors/0000775000175100017510000000000013245513604016713 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/executors/default_executor.py0000666000175100017510000001504713245513261022637 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. 
# Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from eventlet import timeout as ev_timeout from mistral_lib import actions as mistral_lib from oslo_log import log as logging from osprofiler import profiler from mistral.actions import action_factory as a_f from mistral import context from mistral import exceptions as exc from mistral.executors import base from mistral.rpc import clients as rpc from mistral.utils import inspect_utils as i_u LOG = logging.getLogger(__name__) class DefaultExecutor(base.Executor): def __init__(self): self._engine_client = rpc.get_engine_client() @profiler.trace('default-executor-run-action', hide_args=True) def run_action(self, action_ex_id, action_cls_str, action_cls_attrs, params, safe_rerun, execution_context, redelivered=False, target=None, async_=True, timeout=None): """Runs action. :param action_ex_id: Action execution id. :param action_cls_str: Path to action class in dot notation. :param action_cls_attrs: Attributes of action class which will be set to. :param params: Action parameters. :param safe_rerun: Tells if given action can be safely rerun. :param execution_context: A dict of values providing information about the current execution. :param redelivered: Tells if given action was run before on another executor. :param target: Target (group of action executors). :param async_: If True, run action in asynchronous mode (w/o waiting for completion). 
:param timeout: a period of time in seconds after which execution of action will be interrupted :return: Action result. """ def send_error_back(error_msg): error_result = mistral_lib.Result(error=error_msg) if action_ex_id: self._engine_client.on_action_complete( action_ex_id, error_result ) return None return error_result if redelivered and not safe_rerun: msg = ( "Request to run action %s was redelivered, but action %s " "cannot be re-run safely. The only safe thing to do is fail " "action." % (action_cls_str, action_cls_str) ) return send_error_back(msg) # Load action module. action_cls = a_f.construct_action_class( action_cls_str, action_cls_attrs ) # Instantiate action. try: action = action_cls(**params) except Exception as e: msg = ( "Failed to initialize action %s. Action init params = %s. " "Actual init params = %s. More info: %s" % ( action_cls_str, i_u.get_arg_list(action_cls.__init__), params.keys(), e ) ) LOG.warning(msg) return send_error_back(msg) # Run action. try: with ev_timeout.Timeout(seconds=timeout): # NOTE(d0ugal): If the action is a subclass of mistral-lib we # know that it expects to be passed the context. if isinstance(action, mistral_lib.Action): action_ctx = context.create_action_context( execution_context) result = action.run(action_ctx) else: result = action.run() # Note: it's made for backwards compatibility with already # existing Mistral actions which don't return result as # instance of workflow.utils.Result. if not isinstance(result, mistral_lib.Result): result = mistral_lib.Result(data=result) except BaseException as e: msg = ( "Failed to run action [action_ex_id=%s, action_cls='%s', " "attributes='%s', params='%s']\n %s" % ( action_ex_id, action_cls, action_cls_attrs, params, e ) ) LOG.exception(msg) return send_error_back(msg) # Send action result. 
try: if action_ex_id and (action.is_sync() or result.is_error()): self._engine_client.on_action_complete( action_ex_id, result, async_=True ) except exc.MistralException as e: # In case of a Mistral exception we can try to send error info to # engine because most likely it's not related to the infrastructure # such as message bus or network. One known case is when the action # returns a bad result (e.g. invalid unicode) which can't be # serialized. msg = ( "Failed to complete action due to a Mistral exception " "[action_ex_id=%s, action_cls='%s', " "attributes='%s', params='%s']\n %s" % ( action_ex_id, action_cls, action_cls_attrs, params, e ) ) LOG.exception(msg) return send_error_back(msg) except Exception as e: # If it's not a Mistral exception all we can do is only # log the error. msg = ( "Failed to complete action due to an unexpected exception " "[action_ex_id=%s, action_cls='%s', " "attributes='%s', params='%s']\n %s" % ( action_ex_id, action_cls, action_cls_attrs, params, e ) ) LOG.exception(msg) return result mistral-6.0.0/mistral/executors/executor_server.py0000666000175100017510000000712113245513261022513 0ustar zuulzuul00000000000000# Copyright 2016 - Nokia Networks # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
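`DefaultExecutor.run_action` above makes two decisions worth isolating: it fails fast when a request was *redelivered* but the action is not marked `safe_rerun` (the action may already have produced side effects on another executor), and it wraps raw return values into a `Result` for backwards compatibility. A minimal sketch of just that control flow — `Result` here is a simplified stand-in for `mistral_lib.actions.Result`, and `run_action` takes a plain callable instead of a real action class:

```python
class Result(object):
    """Simplified stand-in for mistral_lib's Result (data xor error)."""

    def __init__(self, data=None, error=None):
        self.data = data
        self.error = error

    def is_error(self):
        return self.error is not None


def run_action(action, safe_rerun, redelivered=False):
    # A redelivered request may already have run elsewhere; only
    # actions declared safe to re-run are allowed to execute again.
    if redelivered and not safe_rerun:
        return Result(error="action cannot be re-run safely")

    outcome = action()

    # Backwards compatibility: legacy actions return raw data, which
    # the executor normalizes into a Result before reporting back.
    if not isinstance(outcome, Result):
        outcome = Result(data=outcome)

    return outcome
```

The real method additionally sends the error `Result` back to the engine via `on_action_complete` instead of merely returning it; that RPC plumbing is omitted here.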
from oslo_log import log as logging from mistral import config as cfg from mistral.executors import default_executor as exe from mistral.rpc import base as rpc from mistral.service import base as service_base from mistral import utils from mistral.utils import profiler as profiler_utils LOG = logging.getLogger(__name__) class ExecutorServer(service_base.MistralService): """Executor server. This class manages executor life-cycle and gets registered as an RPC endpoint to process executor specific calls. It also registers a cluster member associated with this instance of executor. """ def __init__(self, executor, setup_profiler=True): super(ExecutorServer, self).__init__('executor_group', setup_profiler) self.executor = executor self._rpc_server = None def start(self): super(ExecutorServer, self).start() if self._setup_profiler: profiler_utils.setup('mistral-executor', cfg.CONF.executor.host) # Initialize and start RPC server. self._rpc_server = rpc.get_rpc_server_driver()(cfg.CONF.executor) self._rpc_server.register_endpoint(self) self._rpc_server.run(executor='threading') self._notify_started('Executor server started.') def stop(self, graceful=False): super(ExecutorServer, self).stop(graceful) if self._rpc_server: self._rpc_server.stop(graceful) def run_action(self, rpc_ctx, action_ex_id, action_cls_str, action_cls_attrs, params, safe_rerun, execution_context, timeout): """Receives calls over RPC to run action on executor. :param timeout: a period of time in seconds after which execution of action will be interrupted :param execution_context: A dict of values providing information about the current execution. :param rpc_ctx: RPC request context dictionary. :param action_ex_id: Action execution id. :param action_cls_str: Action class name. :param action_cls_attrs: Action class attributes. :param params: Action input parameters. :param safe_rerun: Tells if given action can be safely rerun. :return: Action result. 
""" LOG.info( "Received RPC request 'run_action'[action_ex_id=%s, " "action_cls_str=%s, action_cls_attrs=%s, params=%s, " "timeout=%s]", action_ex_id, action_cls_str, action_cls_attrs, utils.cut(params), timeout ) redelivered = rpc_ctx.redelivered or False return self.executor.run_action( action_ex_id, action_cls_str, action_cls_attrs, params, safe_rerun, execution_context, redelivered, timeout=timeout ) def get_oslo_service(setup_profiler=True): return ExecutorServer( exe.DefaultExecutor(), setup_profiler=setup_profiler ) mistral-6.0.0/mistral/executors/base.py0000666000175100017510000000465313245513261020210 0ustar zuulzuul00000000000000# Copyright 2017 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import abc import six from mistral import serialization from mistral_lib.actions import types from stevedore import driver _EXECUTORS = {} serialization.register_serializer(types.Result, types.ResultSerializer()) def cleanup(): global _EXECUTORS _EXECUTORS = {} def get_executor(exec_type): global _EXECUTORS if not _EXECUTORS.get(exec_type): mgr = driver.DriverManager( 'mistral.executors', exec_type, invoke_on_load=True ) _EXECUTORS[exec_type] = mgr.driver return _EXECUTORS[exec_type] @six.add_metaclass(abc.ABCMeta) class Executor(object): """Action executor interface.""" @abc.abstractmethod def run_action(self, action_ex_id, action_cls_str, action_cls_attrs, params, safe_rerun, execution_context, redelivered=False, target=None, async_=True, timeout=None): """Runs action. :param timeout: a period of time in seconds after which execution of action will be interrupted :param action_ex_id: Corresponding action execution id. :param action_cls_str: Path to action class in dot notation. :param action_cls_attrs: Attributes of action class which will be set to. :param params: Action parameters. :param safe_rerun: Tells if given action can be safely rerun. :param execution_context: A dict of values providing information about the current execution. :param redelivered: Tells if given action was run before on another executor. :param target: Target (group of action executors). :param async_: If True, run action in asynchronous mode (w/o waiting for completion). :return: Action result. """ raise NotImplementedError() mistral-6.0.0/mistral/executors/__init__.py0000666000175100017510000000000013245513261021013 0ustar zuulzuul00000000000000mistral-6.0.0/mistral/executors/remote_executor.py0000666000175100017510000000173613245513261022506 0ustar zuulzuul00000000000000# Copyright 2017 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from oslo_config import cfg from oslo_log import log as logging from mistral.rpc import clients as rpc_clients LOG = logging.getLogger(__name__) class RemoteExecutor(rpc_clients.ExecutorClient): """Executor that passes execution request to a remote executor.""" def __init__(self): super(RemoteExecutor, self).__init__(cfg.CONF.executor) mistral-6.0.0/mistral/resources/0000775000175100017510000000000013245513604016704 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/resources/actions/0000775000175100017510000000000013245513604020344 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/resources/actions/wait_ssh.yaml0000666000175100017510000000041613245513261023053 0ustar zuulzuul00000000000000--- version: '2.0' std.wait_ssh: description: Simple SSH command. base: std.ssh base-input: host: <% $.host %> username: <% $.username %> password: <% $.password %> cmd: 'ls -l' input: - host - username - password mistral-6.0.0/mistral/resources/workflows/0000775000175100017510000000000013245513604020741 5ustar zuulzuul00000000000000mistral-6.0.0/mistral/resources/workflows/create_instance.yaml0000666000175100017510000000453213245513261024761 0ustar zuulzuul00000000000000--- version: '2.0' std.create_instance: type: direct description: | Creates VM and waits till VM OS is up and running. input: - name - image_id - flavor_id - ssh_username: null - ssh_password: null # Name of previously created keypair to inject into the instance. # Either ssh credentials or keypair must be provided. 
- key_name: null # Security_groups: A list of security group names - security_groups: null # An ordered list of nics to be added to this server, with information about connected networks, fixed IPs, port etc. # Example: nics: [{"net-id": "27aa8c1c-d6b8-4474-b7f7-6cdcf63ac856"}] - nics: null task-defaults: on-error: - delete_vm output: ip: <% $.vm_ip %> id: <% $.vm_id %> name: <% $.name %> status: <% $.status %> tasks: create_vm: description: Initial request to create a VM. action: nova.servers_create name=<% $.name %> image=<% $.image_id %> flavor=<% $.flavor_id %> input: key_name: <% $.key_name %> security_groups: <% $.security_groups %> nics: <% $.nics %> publish: vm_id: <% task(create_vm).result.id %> on-success: - search_for_ip search_for_ip: description: Gets first free ip from Nova floating IPs. action: nova.floating_ips_findall instance_id=null publish: vm_ip: <% task(search_for_ip).result[0].ip %> on-success: - wait_vm_active wait_vm_active: description: Waits till VM is ACTIVE. action: nova.servers_find id=<% $.vm_id %> status="ACTIVE" retry: count: 10 delay: 10 publish: status: <% task(wait_vm_active).result.status %> on-success: - associate_ip associate_ip: description: Associate server with one of floating IPs. action: nova.servers_add_floating_ip server=<% $.vm_id %> address=<% $.vm_ip %> wait-after: 5 on-success: - wait_ssh wait_ssh: description: Wait till operating system on the VM is up (SSH command). action: std.wait_ssh username=<% $.ssh_username %> password=<% $.ssh_password %> host=<% $.vm_ip %> retry: count: 10 delay: 10 delete_vm: description: Destroy VM. workflow: std.delete_instance instance_id=<% $.vm_id %> on-complete: - fail mistral-6.0.0/mistral/resources/workflows/delete_instance.yaml0000666000175100017510000000071013245513261024752 0ustar zuulzuul00000000000000--- version: "2.0" std.delete_instance: type: direct input: - instance_id description: Deletes VM. tasks: delete_vm: description: Destroy VM. 
action: nova.servers_delete server=<% $.instance_id %> wait-after: 10 on-success: - find_given_vm find_given_vm: description: Checks that VM is already deleted. action: nova.servers_find id=<% $.instance_id %> on-error: - succeed mistral-6.0.0/mistral/context.py0000666000175100017510000002246013245513261016735 0ustar zuulzuul00000000000000# Copyright 2013 - Mirantis, Inc. # Copyright 2016 - Brocade Communications Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import base64 from mistral_lib.actions import context as lib_ctx from oslo_config import cfg from oslo_context import context as oslo_context import oslo_messaging as messaging from oslo_serialization import jsonutils from osprofiler import profiler import pecan from pecan import hooks from mistral import auth from mistral import exceptions as exc from mistral import serialization from mistral import utils CONF = cfg.CONF _CTX_THREAD_LOCAL_NAME = "MISTRAL_APP_CTX_THREAD_LOCAL" ALLOWED_WITHOUT_AUTH = ['/', '/v2/'] class MistralContext(oslo_context.RequestContext): def __init__(self, auth_uri=None, auth_cacert=None, insecure=False, service_catalog=None, region_name=None, is_trust_scoped=False, redelivered=False, expires_at=None, trust_id=None, is_target=False, **kwargs): self.auth_uri = auth_uri self.auth_cacert = auth_cacert self.insecure = insecure self.service_catalog = service_catalog self.region_name = region_name self.is_trust_scoped = is_trust_scoped self.redelivered = redelivered self.expires_at = 
expires_at self.trust_id = trust_id self.is_target = is_target # We still use Mistral thread local variable. Maybe could consider # using the variable provided by oslo_context in future. super(MistralContext, self).__init__(overwrite=False, **kwargs) def to_dict(self): """Return a dictionary of context attributes.""" ctx_dict = super(MistralContext, self).to_dict() ctx_dict.update( { 'user_name': self.user_name, 'project_name': self.project_name, 'domain_name': self.domain_name, 'user_domain_name': self.user_domain_name, 'project_domain_name': self.project_domain_name, 'auth_uri': self.auth_uri, 'auth_cacert': self.auth_cacert, 'insecure': self.insecure, 'service_catalog': self.service_catalog, 'region_name': self.region_name, 'is_trust_scoped': self.is_trust_scoped, 'redelivered': self.redelivered, 'expires_at': self.expires_at, 'trust_id': self.trust_id, 'is_target': self.is_target, } ) return ctx_dict @classmethod def from_dict(cls, values, **kwargs): """Construct a context object from a provided dictionary.""" kwargs.setdefault('auth_uri', values.get('auth_uri')) kwargs.setdefault('auth_cacert', values.get('auth_cacert')) kwargs.setdefault('insecure', values.get('insecure', False)) kwargs.setdefault('service_catalog', values.get('service_catalog')) kwargs.setdefault('region_name', values.get('region_name')) kwargs.setdefault( 'is_trust_scoped', values.get('is_trust_scoped', False) ) kwargs.setdefault('redelivered', values.get('redelivered', False)) kwargs.setdefault('expires_at', values.get('expires_at')) kwargs.setdefault('trust_id', values.get('trust_id')) kwargs.setdefault('is_target', values.get('is_target', False)) return super(MistralContext, cls).from_dict(values, **kwargs) @classmethod def from_environ(cls, headers, env): kwargs = _extract_mistral_auth_params(headers) token_info = env.get('keystone.token_info', {}) if not kwargs['is_target']: kwargs['service_catalog'] = token_info.get('token', {}) kwargs['expires_at'] = 
(token_info['token']['expires_at'] if token_info else None) context = super(MistralContext, cls).from_environ(env, **kwargs) context.is_admin = True if 'admin' in context.roles else False return context def has_ctx(): return utils.has_thread_local(_CTX_THREAD_LOCAL_NAME) def ctx(): if not has_ctx(): raise exc.ApplicationContextNotFoundException() return utils.get_thread_local(_CTX_THREAD_LOCAL_NAME) def set_ctx(new_ctx): utils.set_thread_local(_CTX_THREAD_LOCAL_NAME, new_ctx) def _extract_mistral_auth_params(headers): service_catalog = None if headers.get("X-Target-Auth-Uri"): params = { # TODO(akovi): Target cert not handled yet 'auth_cacert': None, 'insecure': headers.get('X-Target-Insecure', False), 'auth_token': headers.get('X-Target-Auth-Token'), 'auth_uri': headers.get('X-Target-Auth-Uri'), 'tenant': headers.get('X-Target-Project-Id'), 'user': headers.get('X-Target-User-Id'), 'user_name': headers.get('X-Target-User-Name'), 'region_name': headers.get('X-Target-Region-Name'), 'is_target': True } if not params['auth_token']: raise (exc.MistralException( 'Target auth URI (X-Target-Auth-Uri) and target auth token ' '(X-Target-Auth-Token) must be present')) # It's possible that target service catalog is not provided, in this # case, Mistral needs to get target service catalog dynamically when # talking to target openstack deployment later on.
service_catalog = _extract_service_catalog_from_headers( headers ) else: params = { 'auth_uri': CONF.keystone_authtoken.auth_uri, 'auth_cacert': CONF.keystone_authtoken.cafile, 'insecure': False, 'region_name': headers.get('X-Region-Name'), 'is_target': False } params['service_catalog'] = service_catalog return params def _extract_service_catalog_from_headers(headers): target_service_catalog_header = headers.get( 'X-Target-Service-Catalog') if target_service_catalog_header: decoded_catalog = base64.b64decode( target_service_catalog_header).decode() return jsonutils.loads(decoded_catalog) else: return None class RpcContextSerializer(messaging.Serializer): def __init__(self, entity_serializer=None): self.entity_serializer = ( entity_serializer or serialization.get_polymorphic_serializer() ) def serialize_entity(self, context, entity): if not self.entity_serializer: return entity return self.entity_serializer.serialize(entity) def deserialize_entity(self, context, entity): if not self.entity_serializer: return entity return self.entity_serializer.deserialize(entity) def serialize_context(self, context): ctx = context.to_dict() pfr = profiler.get() if pfr: ctx['trace_info'] = { "hmac_key": pfr.hmac_key, "base_id": pfr.get_base_id(), "parent_id": pfr.get_id() } return ctx def deserialize_context(self, context): trace_info = context.pop('trace_info', None) if trace_info: profiler.init(**trace_info) ctx = MistralContext.from_dict(context) set_ctx(ctx) return ctx class AuthHook(hooks.PecanHook): def before(self, state): if state.request.path in ALLOWED_WITHOUT_AUTH: return if not CONF.pecan.auth_enable: return try: auth_handler = auth.get_auth_handler() auth_handler.authenticate(state.request) except Exception as e: msg = "Failed to validate access token: %s" % str(e) pecan.abort( status_code=401, detail=msg, headers={'Server-Error-Message': msg} ) class ContextHook(hooks.PecanHook): def before(self, state): context = MistralContext.from_environ( state.request.headers, 
state.request.environ ) set_ctx(context) def after(self, state): set_ctx(None) def create_action_context(execution_ctx): context = ctx() security_ctx = lib_ctx.SecurityContext( auth_cacert=context.auth_cacert, auth_token=context.auth_token, auth_uri=context.auth_uri, expires_at=context.expires_at, insecure=context.insecure, is_target=context.is_target, is_trust_scoped=context.is_trust_scoped, project_id=context.project_id, project_name=context.project_name, user_name=context.user_name, redelivered=context.redelivered, region_name=context.region_name, service_catalog=context.service_catalog, trust_id=context.trust_id, ) ex_ctx = lib_ctx.ExecutionContext(**execution_ctx) return lib_ctx.ActionContext(security_ctx, ex_ctx) mistral-6.0.0/.testr.conf0000666000175100017510000000067413245513261015315 0ustar zuulzuul00000000000000[DEFAULT] test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \ OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \ OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \ OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \ ${PYTHON:-python} -m subunit.run discover -t ./ ./mistral/tests/unit $LISTOPT $IDOPTION test_id_option=--load-list $IDFILE test_list_option=--list test_run_concurrency=echo ${TEST_RUN_CONCURRENCY:-0} mistral-6.0.0/.zuul.yaml0000666000175100017510000000701313245513261015162 0ustar zuulzuul00000000000000- job: name: mistral-devstack-base parent: devstack timeout: 7800 vars: devstack_plugins: mistral: https://git.openstack.org/openstack/mistral heat: https://git.openstack.org/openstack/heat devstack_services: heat: True h-api: True h-api-cfn: True h-api-cw: True h-eng: True tox_environment: IDENTITY_API_VERSION: 3 PYTHONUNBUFFERED: 'true' MISTRAL_USE_MOD_WSGI: True MISTRAL_RPC_IMPLEMENTATION: oslo MYSQL_ROOT_PW: secretdatabase required-projects: - openstack-dev/devstack - openstack-infra/devstack-gate - openstack/heat - openstack/mistral - openstack/python-mistralclient - openstack/mistral-tempest-plugin - job: name: mistral-rally-task parent: 
mistral-devstack-base run: playbooks/rally/run.yaml vars: devstack_plugins: rally: https://git.openstack.org/openstack/rally tox_environment: DEVSTACK_GATE_TEMPEST_LARGE_OPS: 0 DEVSTACK_GATE_EXERCISES: 0 RALLY_SCENARIO: task-{{ zuul.project.canonical_name }} DEVSTACK_GATE_NEUTRON: 1 required-projects: - openstack/rally - job: name: mistral-docker-buildimage parent: publish-openstack-artifacts run: playbooks/docker-buildimage/run.yaml post-run: playbooks/docker-buildimage/post.yaml timeout: 1800 required-projects: - openstack/mistral # This job does not work. We can come back to it later. # - job: # name: mistral-ha # parent: legacy-base # run: playbooks/legacy/mistral-ha/run # timeout: 4200 - job: name: mistral-tox-unit-mysql parent: openstack-tox vars: tox_envlist: unit-mysql - job: name: mistral-tox-unit-postgresql parent: openstack-tox vars: tox_envlist: unit-postgresql tox_environment: { CI_PROJECT: "{{ zuul['project']['name'] }}" } - project: check: jobs: - openstack-tox-cover: voting: false - mistral-devstack - mistral-devstack-non-apache: branches: ^(?!stable/(newton|ocata)).*$ - mistral-devstack-kombu: branches: ^(?!stable/newton).*$ - mistral-tox-unit-mysql - mistral-tox-unit-postgresql # TripleO jobs that deploy Mistral. # Note we don't use a project-template here, so it's easier # to disable voting on one specific job if things go wrong. # tripleo-ci-centos-7-scenario003-multinode-oooq will only # run on stable/pike while the -container will run in Queens # and beyond. # If you need any support to debug these jobs in case of # failures, please reach us on #tripleo IRC channel. 
- tripleo-ci-centos-7-scenario003-multinode-oooq - tripleo-ci-centos-7-scenario003-multinode-oooq-container # Disable the rally job until it actually works # - mistral-rally-task: # voting: false gate: jobs: - mistral-devstack - mistral-devstack-non-apache - mistral-tox-unit-mysql - mistral-tox-unit-postgresql - tripleo-ci-centos-7-scenario003-multinode-oooq - tripleo-ci-centos-7-scenario003-multinode-oooq-container - mistral-devstack-kombu post: jobs: - mistral-docker-buildimage: branches: master experimental: jobs: - mistral-docker-buildimage: branches: master # This job doesn't work yet. # - mistral-ha: # voting: false
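As a side note on the context code earlier in this archive: the `X-Target-Service-Catalog` header consumed by `mistral/context.py` carries a base64-encoded JSON blob. A minimal standalone sketch of that encode/decode round trip — the helper name is hypothetical and plain stdlib `json` stands in for `oslo_serialization.jsonutils`:

```python
import base64
import json


def extract_target_service_catalog(headers):
    """Sketch of _extract_service_catalog_from_headers: decode the
    base64-encoded JSON catalog, or return None if the header is absent."""
    raw = headers.get('X-Target-Service-Catalog')
    if not raw:
        return None
    return json.loads(base64.b64decode(raw).decode())


# A client would base64-encode the JSON catalog before sending it.
catalog = [{'type': 'compute', 'name': 'nova'}]
headers = {
    'X-Target-Service-Catalog':
        base64.b64encode(json.dumps(catalog).encode()).decode(),
}

decoded = extract_target_service_catalog(headers)
print(decoded)  # → [{'type': 'compute', 'name': 'nova'}]
```

When the header is missing, the real code falls through and Mistral fetches the target catalog dynamically later, which is why returning `None` (rather than raising) is the right behavior here.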